Dataset columns (reconstructed from the viewer header):

- status: string, 1 distinct value
- repo_name: string, 31 distinct values
- repo_url: string, 31 distinct values
- issue_id: int64, range 1 to 104k
- title: string, length 4 to 369
- body: string, length 0 to 254k, nullable
- issue_url: string, length 37 to 56
- pull_url: string, length 37 to 54
- before_fix_sha: string, length 40
- after_fix_sha: string, length 40
- report_datetime: timestamp[us, tz=UTC]
- language: string, 5 distinct values
- commit_datetime: timestamp[us, tz=UTC]
- updated_file: string, length 4 to 188
- file_content: string, length 0 to 5.12M
Record 1

status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 77267
title: unarchive: io_buffer_size does not work

body:
### Summary
The option `io_buffer_size` added in ansible-core 2.12 (#74094) does not appear in the argument spec and thus does not work. (Unlike the other two documented options that are not in the argument spec, this one is not handled by the accompanying action plugin.)
### Issue Type
Bug Report
### Component Name
unarchive
### Ansible Version
```console
2.12+
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
-
### Expected Results
-
### Actual Results
```console
Argument spec validation will reject this option.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
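
The module source in this record confirms the mismatch: `ZipArchive.__init__` reads the option with `module.params.get("io_buffer_size", 64 * 1024)`, but `main()` never declares `io_buffer_size` in its argument spec, so supplying the option in a task fails argument-spec validation. A minimal sketch of the kind of change the fix implies (an assumption based on the issue text, not the confirmed diff from PR #77271):

```python
# Hypothetical sketch: declare the documented option in the argument spec so
# that argument-spec validation accepts it. 64 * 1024 matches the documented
# 64 KiB default.
argument_spec = dict(
    src=dict(type='path', required=True),
    dest=dict(type='path', required=True),
    io_buffer_size=dict(type='int', default=64 * 1024),  # the missing entry
    # ... remaining unarchive options unchanged ...
)
```

With the option declared, the handler could read `module.params['io_buffer_size']` directly instead of relying on the `.get()` fallback that masks the missing declaration.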
issue_url: https://github.com/ansible/ansible/issues/77267
pull_url: https://github.com/ansible/ansible/pull/77271
before_fix_sha: c555ce1bd90e95cca0d26259de27399d9bf18da4
after_fix_sha: e3c72230cda45798b4d9bd98c7f296d2895c4027
report_datetime: 2022-03-11T21:08:12Z
language: python
commit_datetime: 2022-03-17T19:15:13Z
updated_file: lib/ansible/modules/unarchive.py

file_content:
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2013, Dylan Martin <[email protected]>
# Copyright: (c) 2015, Toshio Kuratomi <[email protected]>
# Copyright: (c) 2016, Dag Wieers <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: unarchive
version_added: '1.4'
short_description: Unpacks an archive after (optionally) copying it from the local machine
description:
- The C(unarchive) module unpacks an archive. It will not unpack a compressed file that does not contain an archive.
- By default, it will copy the source file from the local system to the target before unpacking.
- Set C(remote_src=yes) to unpack an archive which already exists on the target.
- If checksum validation is desired, use M(ansible.builtin.get_url) or M(ansible.builtin.uri) instead to fetch the file and set C(remote_src=yes).
- For Windows targets, use the M(community.windows.win_unzip) module instead.
options:
src:
description:
- If C(remote_src=no) (default), local path to archive file to copy to the target server; can be absolute or relative. If C(remote_src=yes), path on the
target server to existing archive file to unpack.
- If C(remote_src=yes) and C(src) contains C(://), the remote machine will download the file from the URL first. (version_added 2.0). This is only for
simple cases, for full download support use the M(ansible.builtin.get_url) module.
type: path
required: true
dest:
description:
- Remote absolute path where the archive should be unpacked.
type: path
required: true
copy:
description:
- If true, the file is copied from the local controller to the managed (remote) node; otherwise, the plugin will look for the src archive on the managed machine.
- This option has been deprecated in favor of C(remote_src).
- This option is mutually exclusive with C(remote_src).
type: bool
default: yes
creates:
description:
- If the specified absolute path (file or directory) already exists, this step will B(not) be run.
type: path
version_added: "1.6"
io_buffer_size:
description:
- Size, in bytes, of the volatile memory buffer used for extracting files from the archive.
type: int
default: 64 KiB
version_added: "2.12"
list_files:
description:
- If set to True, return the list of files that are contained in the tarball.
type: bool
default: no
version_added: "2.0"
exclude:
description:
- List the directory and file entries that you would like to exclude from the unarchive action.
- Mutually exclusive with C(include).
type: list
default: []
elements: str
version_added: "2.1"
include:
description:
- List of directory and file entries that you would like to extract from the archive. If C(include)
is not empty, only files listed here will be extracted.
- Mutually exclusive with C(exclude).
type: list
default: []
elements: str
version_added: "2.11"
keep_newer:
description:
- Do not replace existing files that are newer than files from the archive.
type: bool
default: no
version_added: "2.1"
extra_opts:
description:
- Specify additional options by passing in an array.
- Each space-separated command-line option should be a new element of the array. See examples.
- Command-line options with multiple elements must use multiple lines in the array, one for each element.
type: list
elements: str
default: []
version_added: "2.1"
remote_src:
description:
- Set to C(yes) to indicate the archived file is already on the remote system and not local to the Ansible controller.
- This option is mutually exclusive with C(copy).
type: bool
default: no
version_added: "2.2"
validate_certs:
description:
- This only applies if using a https URL as the source of the file.
- This should only be set to C(no) when used on personally controlled sites using a self-signed certificate.
- Prior to 2.2 the code worked as if this was set to C(yes).
type: bool
default: yes
version_added: "2.2"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
- action_common_attributes.files
- decrypt
- files
attributes:
action:
support: full
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: partial
details: Uses gtar's C(--diff) arg to calculate if changed or not. If this C(arg) is not supported, it will always unpack the archive.
platform:
platforms: posix
safe_file_operations:
support: none
vault:
support: full
todo:
- Re-implement tar support using native tarfile module.
- Re-implement zip support using native zipfile module.
notes:
- Requires C(zipinfo) and C(gtar)/C(unzip) command on target host.
- Requires C(zstd) command on target host to expand I(.tar.zst) files.
- Can handle I(.zip) files using C(unzip) as well as I(.tar), I(.tar.gz), I(.tar.bz2), I(.tar.xz), and I(.tar.zst) files using C(gtar).
- Does not handle I(.gz) files, I(.bz2) files, I(.xz), or I(.zst) files that do not contain a I(.tar) archive.
- Existing files/directories in the destination which are not in the archive
are not touched. This is the same behavior as a normal archive extraction.
- Existing files/directories in the destination which are not in the archive
are ignored for purposes of deciding if the archive should be unpacked or not.
seealso:
- module: community.general.archive
- module: community.general.iso_extract
- module: community.windows.win_unzip
author: Michael DeHaan
'''
EXAMPLES = r'''
- name: Extract foo.tgz into /var/lib/foo
ansible.builtin.unarchive:
src: foo.tgz
dest: /var/lib/foo
- name: Unarchive a file that is already on the remote machine
ansible.builtin.unarchive:
src: /tmp/foo.zip
dest: /usr/local/bin
remote_src: yes
- name: Unarchive a file that needs to be downloaded (added in 2.0)
ansible.builtin.unarchive:
src: https://example.com/example.zip
dest: /usr/local/bin
remote_src: yes
- name: Unarchive a file with extra options
ansible.builtin.unarchive:
src: /tmp/foo.zip
dest: /usr/local/bin
extra_opts:
- --transform
- s/^xxx/yyy/
'''
RETURN = r'''
dest:
description: Path to the destination directory.
returned: always
type: str
sample: /opt/software
files:
description: List of all the files in the archive.
returned: When I(list_files) is True
type: list
sample: '["file1", "file2"]'
gid:
description: Numerical ID of the group that owns the destination directory.
returned: always
type: int
sample: 1000
group:
description: Name of the group that owns the destination directory.
returned: always
type: str
sample: "librarians"
handler:
description: Archive software handler used to extract and decompress the archive.
returned: always
type: str
sample: "TgzArchive"
mode:
description: String that represents the octal permissions of the destination directory.
returned: always
type: str
sample: "0755"
owner:
description: Name of the user that owns the destination directory.
returned: always
type: str
sample: "paul"
size:
description: The size of destination directory in bytes. Does not include the size of files or subdirectories contained within.
returned: always
type: int
sample: 36
src:
description:
- The source archive's path.
- If I(src) was a remote web URL, or from the local ansible controller, this shows the temporary location where the download was stored.
returned: always
type: str
sample: "/home/paul/test.tar.gz"
state:
description: State of the destination. Effectively always "directory".
returned: always
type: str
sample: "directory"
uid:
description: Numerical ID of the user that owns the destination directory.
returned: always
type: int
sample: 1000
'''
import binascii
import codecs
import datetime
import fnmatch
import grp
import os
import platform
import pwd
import re
import stat
import time
import traceback
from functools import partial
from zipfile import ZipFile, BadZipfile
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.urls import fetch_file
try: # python 3.3+
from shlex import quote # type: ignore[attr-defined]
except ImportError: # older python
from pipes import quote
# String from tar that shows the tar contents are different from the
# filesystem
OWNER_DIFF_RE = re.compile(r': Uid differs$')
GROUP_DIFF_RE = re.compile(r': Gid differs$')
MODE_DIFF_RE = re.compile(r': Mode differs$')
MOD_TIME_DIFF_RE = re.compile(r': Mod time differs$')
# NEWER_DIFF_RE = re.compile(r' is newer or same age.$')
EMPTY_FILE_RE = re.compile(r': : Warning: Cannot stat: No such file or directory$')
MISSING_FILE_RE = re.compile(r': Warning: Cannot stat: No such file or directory$')
ZIP_FILE_MODE_RE = re.compile(r'([r-][w-][SsTtx-]){3}')
INVALID_OWNER_RE = re.compile(r': Invalid owner')
INVALID_GROUP_RE = re.compile(r': Invalid group')
def crc32(path, buffer_size):
''' Return a CRC32 checksum of a file '''
crc = binascii.crc32(b'')
with open(path, 'rb') as f:
for b_block in iter(partial(f.read, buffer_size), b''):
crc = binascii.crc32(b_block, crc)
return crc & 0xffffffff
def shell_escape(string):
''' Quote meta-characters in the args for the unix shell '''
return re.sub(r'([^A-Za-z0-9_])', r'\\\1', string)
class UnarchiveError(Exception):
pass
class ZipArchive(object):
def __init__(self, src, b_dest, file_args, module):
self.src = src
self.b_dest = b_dest
self.file_args = file_args
self.opts = module.params['extra_opts']
self.module = module
self.io_buffer_size = module.params.get("io_buffer_size", 64 * 1024)
self.excludes = module.params['exclude']
self.includes = []
self.include_files = self.module.params['include']
self.cmd_path = None
self.zipinfo_cmd_path = None
self._files_in_archive = []
self._infodict = dict()
def _permstr_to_octal(self, modestr, umask):
''' Convert a Unix permission string (rw-r--r--) into a mode (0644) '''
revstr = modestr[::-1]
mode = 0
for j in range(0, 3):
for i in range(0, 3):
if revstr[i + 3 * j] in ['r', 'w', 'x', 's', 't']:
mode += 2 ** (i + 3 * j)
# The unzip utility does not support setting the stST bits
# if revstr[i + 3 * j] in ['s', 't', 'S', 'T' ]:
# mode += 2 ** (9 + j)
return (mode & ~umask)
def _legacy_file_list(self):
rc, out, err = self.module.run_command([self.cmd_path, '-v', self.src])
if rc:
raise UnarchiveError('Neither python zipfile nor unzip can read %s' % self.src)
for line in out.splitlines()[3:-2]:
fields = line.split(None, 7)
self._files_in_archive.append(fields[7])
self._infodict[fields[7]] = int(fields[6])
def _crc32(self, path):
if self._infodict:
return self._infodict[path]
try:
archive = ZipFile(self.src)
except BadZipfile as e:
if e.args[0].lower().startswith('bad magic number'):
# Python2.4 can't handle zipfiles with > 64K files. Try using
# /usr/bin/unzip instead
self._legacy_file_list()
else:
raise
else:
try:
for item in archive.infolist():
self._infodict[item.filename] = int(item.CRC)
except Exception:
archive.close()
raise UnarchiveError('Unable to list files in the archive')
return self._infodict[path]
@property
def files_in_archive(self):
if self._files_in_archive:
return self._files_in_archive
self._files_in_archive = []
try:
archive = ZipFile(self.src)
except BadZipfile as e:
if e.args[0].lower().startswith('bad magic number'):
# Python2.4 can't handle zipfiles with > 64K files. Try using
# /usr/bin/unzip instead
self._legacy_file_list()
else:
raise
else:
try:
for member in archive.namelist():
if self.include_files:
for include in self.include_files:
if fnmatch.fnmatch(member, include):
self._files_in_archive.append(to_native(member))
else:
exclude_flag = False
if self.excludes:
for exclude in self.excludes:
if fnmatch.fnmatch(member, exclude):
exclude_flag = True
break
if not exclude_flag:
self._files_in_archive.append(to_native(member))
except Exception as e:
archive.close()
raise UnarchiveError('Unable to list files in the archive: %s' % to_native(e))
archive.close()
return self._files_in_archive
def is_unarchived(self):
# BSD unzip doesn't support zipinfo listings with timestamp.
cmd = [self.zipinfo_cmd_path, '-T', '-s', self.src]
if self.excludes:
cmd.extend(['-x', ] + self.excludes)
if self.include_files:
cmd.extend(self.include_files)
rc, out, err = self.module.run_command(cmd)
old_out = out
diff = ''
out = ''
if rc == 0:
unarchived = True
else:
unarchived = False
# Get some information related to user/group ownership
umask = os.umask(0)
os.umask(umask)
systemtype = platform.system()
# Get current user and group information
groups = os.getgroups()
run_uid = os.getuid()
run_gid = os.getgid()
try:
run_owner = pwd.getpwuid(run_uid).pw_name
except (TypeError, KeyError):
run_owner = run_uid
try:
run_group = grp.getgrgid(run_gid).gr_name
except (KeyError, ValueError, OverflowError):
run_group = run_gid
# Get future user ownership
fut_owner = fut_uid = None
if self.file_args['owner']:
try:
tpw = pwd.getpwnam(self.file_args['owner'])
except KeyError:
try:
tpw = pwd.getpwuid(int(self.file_args['owner']))
except (TypeError, KeyError, ValueError):
tpw = pwd.getpwuid(run_uid)
fut_owner = tpw.pw_name
fut_uid = tpw.pw_uid
else:
try:
fut_owner = run_owner
except Exception:
pass
fut_uid = run_uid
# Get future group ownership
fut_group = fut_gid = None
if self.file_args['group']:
try:
tgr = grp.getgrnam(self.file_args['group'])
except (ValueError, KeyError):
try:
# no need to check isdigit() explicitly here, if we fail to
# parse, the ValueError will be caught.
tgr = grp.getgrgid(int(self.file_args['group']))
except (KeyError, ValueError, OverflowError):
tgr = grp.getgrgid(run_gid)
fut_group = tgr.gr_name
fut_gid = tgr.gr_gid
else:
try:
fut_group = run_group
except Exception:
pass
fut_gid = run_gid
for line in old_out.splitlines():
change = False
pcs = line.split(None, 7)
if len(pcs) != 8:
# Too few fields... probably a piece of the header or footer
continue
# Check first and seventh field in order to skip header/footer
if len(pcs[0]) != 7 and len(pcs[0]) != 10:
continue
if len(pcs[6]) != 15:
continue
# Possible entries:
# -rw-rws--- 1.9 unx 2802 t- defX 11-Aug-91 13:48 perms.2660
# -rw-a-- 1.0 hpf 5358 Tl i4:3 4-Dec-91 11:33 longfilename.hpfs
# -r--ahs 1.1 fat 4096 b- i4:2 14-Jul-91 12:58 EA DATA. SF
# --w------- 1.0 mac 17357 bx i8:2 4-May-92 04:02 unzip.macr
if pcs[0][0] not in 'dl-?' or not frozenset(pcs[0][1:]).issubset('rwxstah-'):
continue
ztype = pcs[0][0]
permstr = pcs[0][1:]
version = pcs[1]
ostype = pcs[2]
size = int(pcs[3])
path = to_text(pcs[7], errors='surrogate_or_strict')
# Skip excluded files
if path in self.excludes:
out += 'Path %s is excluded on request\n' % path
continue
# Itemized change requires L for symlink
if path[-1] == '/':
if ztype != 'd':
err += 'Path %s incorrectly tagged as "%s", but is a directory.\n' % (path, ztype)
ftype = 'd'
elif ztype == 'l':
ftype = 'L'
elif ztype == '-':
ftype = 'f'
elif ztype == '?':
ftype = 'f'
# Some files may be storing FAT permissions, not Unix permissions
# For FAT permissions, we will use a base permissions set of 777 if the item is a directory or has the execute bit set. Otherwise, 666.
# This permission will then be modified by the system UMask.
# BSD always applies the Umask, even to Unix permissions.
# For Unix style permissions on Linux or Mac, we want to use them directly.
# So we set the UMask for this file to zero. That permission set will then be unchanged when calling _permstr_to_octal
if len(permstr) == 6:
if path[-1] == '/':
permstr = 'rwxrwxrwx'
elif permstr == 'rwx---':
permstr = 'rwxrwxrwx'
else:
permstr = 'rw-rw-rw-'
file_umask = umask
elif 'bsd' in systemtype.lower():
file_umask = umask
else:
file_umask = 0
# Test string conformity
if len(permstr) != 9 or not ZIP_FILE_MODE_RE.match(permstr):
raise UnarchiveError('ZIP info perm format incorrect, %s' % permstr)
# DEBUG
# err += "%s%s %10d %s\n" % (ztype, permstr, size, path)
b_dest = os.path.join(self.b_dest, to_bytes(path, errors='surrogate_or_strict'))
try:
st = os.lstat(b_dest)
except Exception:
change = True
self.includes.append(path)
err += 'Path %s is missing\n' % path
diff += '>%s++++++.?? %s\n' % (ftype, path)
continue
# Compare file types
if ftype == 'd' and not stat.S_ISDIR(st.st_mode):
change = True
self.includes.append(path)
err += 'File %s already exists, but not as a directory\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
if ftype == 'f' and not stat.S_ISREG(st.st_mode):
change = True
unarchived = False
self.includes.append(path)
err += 'Directory %s already exists, but not as a regular file\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
if ftype == 'L' and not stat.S_ISLNK(st.st_mode):
change = True
self.includes.append(path)
err += 'Directory %s already exists, but not as a symlink\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
itemized = list('.%s.......??' % ftype)
# Note: this timestamp calculation has a rounding error
# somewhere... unzip and this timestamp can be one second off
# When that happens, we report a change and re-unzip the file
dt_object = datetime.datetime(*(time.strptime(pcs[6], '%Y%m%d.%H%M%S')[0:6]))
timestamp = time.mktime(dt_object.timetuple())
# Compare file timestamps
if stat.S_ISREG(st.st_mode):
if self.module.params['keep_newer']:
if timestamp > st.st_mtime:
change = True
self.includes.append(path)
err += 'File %s is older, replacing file\n' % path
itemized[4] = 't'
elif stat.S_ISREG(st.st_mode) and timestamp < st.st_mtime:
# Add to excluded files, ignore other changes
out += 'File %s is newer, excluding file\n' % path
self.excludes.append(path)
continue
else:
if timestamp != st.st_mtime:
change = True
self.includes.append(path)
err += 'File %s differs in mtime (%f vs %f)\n' % (path, timestamp, st.st_mtime)
itemized[4] = 't'
# Compare file sizes
if stat.S_ISREG(st.st_mode) and size != st.st_size:
change = True
err += 'File %s differs in size (%d vs %d)\n' % (path, size, st.st_size)
itemized[3] = 's'
# Compare file checksums
if stat.S_ISREG(st.st_mode):
crc = crc32(b_dest, self.io_buffer_size)
if crc != self._crc32(path):
change = True
err += 'File %s differs in CRC32 checksum (0x%08x vs 0x%08x)\n' % (path, self._crc32(path), crc)
itemized[2] = 'c'
# Compare file permissions
# Do not handle permissions of symlinks
if ftype != 'L':
# Use the new mode provided with the action, if there is one
if self.file_args['mode']:
if isinstance(self.file_args['mode'], int):
mode = self.file_args['mode']
else:
try:
mode = int(self.file_args['mode'], 8)
except Exception as e:
try:
mode = AnsibleModule._symbolic_mode_to_octal(st, self.file_args['mode'])
except ValueError as e:
self.module.fail_json(path=path, msg="%s" % to_native(e), exception=traceback.format_exc())
# Only special files require no umask-handling
elif ztype == '?':
mode = self._permstr_to_octal(permstr, 0)
else:
mode = self._permstr_to_octal(permstr, file_umask)
if mode != stat.S_IMODE(st.st_mode):
change = True
itemized[5] = 'p'
err += 'Path %s differs in permissions (%o vs %o)\n' % (path, mode, stat.S_IMODE(st.st_mode))
# Compare file user ownership
owner = uid = None
try:
owner = pwd.getpwuid(st.st_uid).pw_name
except (TypeError, KeyError):
uid = st.st_uid
# If we are not root and requested owner is not our user, fail
if run_uid != 0 and (fut_owner != run_owner or fut_uid != run_uid):
raise UnarchiveError('Cannot change ownership of %s to %s, as user %s' % (path, fut_owner, run_owner))
if owner and owner != fut_owner:
change = True
err += 'Path %s is owned by user %s, not by user %s as expected\n' % (path, owner, fut_owner)
itemized[6] = 'o'
elif uid and uid != fut_uid:
change = True
err += 'Path %s is owned by uid %s, not by uid %s as expected\n' % (path, uid, fut_uid)
itemized[6] = 'o'
# Compare file group ownership
group = gid = None
try:
group = grp.getgrgid(st.st_gid).gr_name
except (KeyError, ValueError, OverflowError):
gid = st.st_gid
if run_uid != 0 and (fut_group != run_group or fut_gid != run_gid) and fut_gid not in groups:
raise UnarchiveError('Cannot change group ownership of %s to %s, as user %s' % (path, fut_group, run_owner))
if group and group != fut_group:
change = True
err += 'Path %s is owned by group %s, not by group %s as expected\n' % (path, group, fut_group)
itemized[6] = 'g'
elif gid and gid != fut_gid:
change = True
err += 'Path %s is owned by gid %s, not by gid %s as expected\n' % (path, gid, fut_gid)
itemized[6] = 'g'
# Register changed files and finalize diff output
if change:
if path not in self.includes:
self.includes.append(path)
diff += '%s %s\n' % (''.join(itemized), path)
if self.includes:
unarchived = False
# DEBUG
# out = old_out + out
return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd, diff=diff)
def unarchive(self):
cmd = [self.cmd_path, '-o']
if self.opts:
cmd.extend(self.opts)
cmd.append(self.src)
# NOTE: Including (changed) files as arguments is problematic (limits on command line/arguments)
# if self.includes:
# NOTE: Command unzip has this strange behaviour where it expects quoted filenames to also be escaped
# cmd.extend(map(shell_escape, self.includes))
if self.excludes:
cmd.extend(['-x'] + self.excludes)
if self.include_files:
cmd.extend(self.include_files)
cmd.extend(['-d', self.b_dest])
rc, out, err = self.module.run_command(cmd)
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
binaries = (
('unzip', 'cmd_path'),
('zipinfo', 'zipinfo_cmd_path'),
)
missing = []
for b in binaries:
try:
setattr(self, b[1], get_bin_path(b[0]))
except ValueError:
missing.append(b[0])
if missing:
return False, "Unable to find required '{missing}' binary in the path.".format(missing="' or '".join(missing))
cmd = [self.cmd_path, '-l', self.src]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return True, None
return False, 'Command "%s" could not handle archive: %s' % (self.cmd_path, err)
class TgzArchive(object):
def __init__(self, src, b_dest, file_args, module):
self.src = src
self.b_dest = b_dest
self.file_args = file_args
self.opts = module.params['extra_opts']
self.module = module
if self.module.check_mode:
self.module.exit_json(skipped=True, msg="remote module (%s) does not support check mode when using gtar" % self.module._name)
self.excludes = [path.rstrip('/') for path in self.module.params['exclude']]
self.include_files = self.module.params['include']
self.cmd_path = None
self.tar_type = None
self.zipflag = '-z'
self._files_in_archive = []
def _get_tar_type(self):
cmd = [self.cmd_path, '--version']
(rc, out, err) = self.module.run_command(cmd)
tar_type = None
if out.startswith('bsdtar'):
tar_type = 'bsd'
elif out.startswith('tar') and 'GNU' in out:
tar_type = 'gnu'
return tar_type
@property
def files_in_archive(self):
if self._files_in_archive:
return self._files_in_archive
cmd = [self.cmd_path, '--list', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
if rc != 0:
raise UnarchiveError('Unable to list files in the archive: %s' % err)
for filename in out.splitlines():
# Compensate for locale-related problems in gtar output (octal unicode representation) #11348
# filename = filename.decode('string_escape')
filename = to_native(codecs.escape_decode(filename)[0])
# We don't allow absolute filenames. If the user wants to unarchive rooted in "/"
# they need to use "dest: '/'". This follows the defaults for gtar, pax, etc.
# Allowing absolute filenames here also causes bugs: https://github.com/ansible/ansible/issues/21397
if filename.startswith('/'):
filename = filename[1:]
exclude_flag = False
if self.excludes:
for exclude in self.excludes:
if fnmatch.fnmatch(filename, exclude):
exclude_flag = True
break
if not exclude_flag:
self._files_in_archive.append(to_native(filename))
return self._files_in_archive
def is_unarchived(self):
cmd = [self.cmd_path, '--diff', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.file_args['owner']:
cmd.append('--owner=' + quote(self.file_args['owner']))
if self.file_args['group']:
cmd.append('--group=' + quote(self.file_args['group']))
if self.module.params['keep_newer']:
cmd.append('--keep-newer-files')
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
# Check whether the differences are in something that we're
# setting anyway
# What is different
unarchived = True
old_out = out
out = ''
run_uid = os.getuid()
# When unarchiving as a user, or when owner/group/mode is supplied --diff is insufficient
# Only way to be sure is to check request with what is on disk (as we do for zip)
# Leave this up to set_fs_attributes_if_different() instead of inducing a (false) change
for line in old_out.splitlines() + err.splitlines():
# FIXME: Remove the bogus lines from error-output as well !
# Ignore bogus errors on empty filenames (when using --split-component)
if EMPTY_FILE_RE.search(line):
continue
if run_uid == 0 and not self.file_args['owner'] and OWNER_DIFF_RE.search(line):
out += line + '\n'
if run_uid == 0 and not self.file_args['group'] and GROUP_DIFF_RE.search(line):
out += line + '\n'
if not self.file_args['mode'] and MODE_DIFF_RE.search(line):
out += line + '\n'
if MOD_TIME_DIFF_RE.search(line):
out += line + '\n'
if MISSING_FILE_RE.search(line):
out += line + '\n'
if INVALID_OWNER_RE.search(line):
out += line + '\n'
if INVALID_GROUP_RE.search(line):
out += line + '\n'
if out:
unarchived = False
return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd)
def unarchive(self):
cmd = [self.cmd_path, '--extract', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.file_args['owner']:
cmd.append('--owner=' + quote(self.file_args['owner']))
if self.file_args['group']:
cmd.append('--group=' + quote(self.file_args['group']))
if self.module.params['keep_newer']:
cmd.append('--keep-newer-files')
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
# Prefer gtar (GNU tar) as it supports the compression options -z, -j and -J
try:
self.cmd_path = get_bin_path('gtar')
except ValueError:
# Fallback to tar
try:
self.cmd_path = get_bin_path('tar')
except ValueError:
return False, "Unable to find required 'gtar' or 'tar' binary in the path"
self.tar_type = self._get_tar_type()
if self.tar_type != 'gnu':
return False, 'Command "%s" detected as tar type %s. GNU tar required.' % (self.cmd_path, self.tar_type)
try:
if self.files_in_archive:
return True, None
except UnarchiveError as e:
return False, 'Command "%s" could not handle archive: %s' % (self.cmd_path, to_native(e))
# Errors and no files in archive assume that we weren't able to
# properly unarchive it
return False, 'Command "%s" found no files in archive. Empty archive files are not supported.' % self.cmd_path
# Class to handle tar files that aren't compressed
class TarArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarArchive, self).__init__(src, b_dest, file_args, module)
# argument to tar
self.zipflag = ''
# Class to handle bzip2 compressed tar files
class TarBzipArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarBzipArchive, self).__init__(src, b_dest, file_args, module)
self.zipflag = '-j'
# Class to handle xz compressed tar files
class TarXzArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarXzArchive, self).__init__(src, b_dest, file_args, module)
self.zipflag = '-J'
# Class to handle zstd compressed tar files
class TarZstdArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarZstdArchive, self).__init__(src, b_dest, file_args, module)
# GNU Tar supports the --use-compress-program option to
# specify which executable to use for
# compression/decompression.
#
# Note: some flavors of BSD tar support --zstd (e.g., FreeBSD
# 12.2), but the TgzArchive class only supports GNU Tar.
self.zipflag = '--use-compress-program=zstd'
# try handlers in order and return the one that works or bail if none work
def pick_handler(src, dest, file_args, module):
handlers = [ZipArchive, TgzArchive, TarArchive, TarBzipArchive, TarXzArchive, TarZstdArchive]
reasons = set()
for handler in handlers:
obj = handler(src, dest, file_args, module)
(can_handle, reason) = obj.can_handle_archive()
if can_handle:
return obj
reasons.add(reason)
reason_msg = '\n'.join(reasons)
module.fail_json(msg='Failed to find handler for "%s". Make sure the required command to extract the file is installed.\n%s' % (src, reason_msg))
def main():
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=dict(
src=dict(type='path', required=True),
dest=dict(type='path', required=True),
remote_src=dict(type='bool', default=False),
creates=dict(type='path'),
list_files=dict(type='bool', default=False),
keep_newer=dict(type='bool', default=False),
exclude=dict(type='list', elements='str', default=[]),
include=dict(type='list', elements='str', default=[]),
extra_opts=dict(type='list', elements='str', default=[]),
validate_certs=dict(type='bool', default=True),
),
add_file_common_args=True,
# check-mode only works for zip files, we cover that later
supports_check_mode=True,
mutually_exclusive=[('include', 'exclude')],
)
src = module.params['src']
dest = module.params['dest']
b_dest = to_bytes(dest, errors='surrogate_or_strict')
remote_src = module.params['remote_src']
file_args = module.load_file_common_arguments(module.params)
# did tar file arrive?
if not os.path.exists(src):
if not remote_src:
module.fail_json(msg="Source '%s' failed to transfer" % src)
# If remote_src=true, and src= contains ://, try and download the file to a temp directory.
elif '://' in src:
src = fetch_file(module, src)
else:
module.fail_json(msg="Source '%s' does not exist" % src)
if not os.access(src, os.R_OK):
module.fail_json(msg="Source '%s' not readable" % src)
# skip working with 0 size archives
try:
if os.path.getsize(src) == 0:
module.fail_json(msg="Invalid archive '%s', the file is 0 bytes" % src)
except Exception as e:
module.fail_json(msg="Source '%s' not readable, %s" % (src, to_native(e)))
# is dest OK to receive tar file?
if not os.path.isdir(b_dest):
module.fail_json(msg="Destination '%s' is not a directory" % dest)
handler = pick_handler(src, b_dest, file_args, module)
res_args = dict(handler=handler.__class__.__name__, dest=dest, src=src)
# do we need to do unpack?
check_results = handler.is_unarchived()
# DEBUG
# res_args['check_results'] = check_results
if module.check_mode:
res_args['changed'] = not check_results['unarchived']
elif check_results['unarchived']:
res_args['changed'] = False
else:
# do the unpack
try:
res_args['extract_results'] = handler.unarchive()
if res_args['extract_results']['rc'] != 0:
module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
except IOError:
module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
else:
res_args['changed'] = True
# Get diff if required
if check_results.get('diff', False):
res_args['diff'] = {'prepared': check_results['diff']}
# Run only if we found differences (idempotence) or diff was missing
if res_args.get('diff', True) and not module.check_mode:
# do we need to change perms?
top_folders = []
for filename in handler.files_in_archive:
file_args['path'] = os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict'))
try:
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False)
except (IOError, OSError) as e:
module.fail_json(msg="Unexpected error when accessing exploded file: %s" % to_native(e), **res_args)
if '/' in filename:
top_folder_path = filename.split('/')[0]
if top_folder_path not in top_folders:
top_folders.append(top_folder_path)
# make sure top folders have the right permissions
# https://github.com/ansible/ansible/issues/35426
if top_folders:
for f in top_folders:
file_args['path'] = "%s/%s" % (dest, f)
try:
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False)
except (IOError, OSError) as e:
module.fail_json(msg="Unexpected error when accessing exploded file: %s" % to_native(e), **res_args)
if module.params['list_files']:
res_args['files'] = handler.files_in_archive
module.exit_json(**res_args)
if __name__ == '__main__':
main()
Record 2

status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 77267
title: unarchive: io_buffer_size does not work
body: identical to the Record 1 body above (same issue, #77267).
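
For this record the PR's change lands in the sanity-test ignore list, `test/sanity/ignore.txt`, shown below in its pre-fix state. The entry relevant to this issue is `lib/ansible/modules/unarchive.py validate-modules:nonexistent-parameter-documented`, which suppresses the sanity check for options that are documented but missing from the argument spec. A rough illustration of what that check flags for this module (option names taken from the Record 1 file content; the set logic is an explanatory sketch, not validate-modules code):

```python
# Options listed in the module's DOCUMENTATION (Record 1) ...
documented = {
    'src', 'dest', 'copy', 'creates', 'io_buffer_size', 'list_files',
    'exclude', 'include', 'keep_newer', 'extra_opts', 'remote_src',
    'validate_certs',
}
# ... versus options actually declared in main()'s argument_spec.
declared = {
    'src', 'dest', 'remote_src', 'creates', 'list_files', 'keep_newer',
    'exclude', 'include', 'extra_opts', 'validate_certs',
}
# Documented but undeclared: {'copy', 'io_buffer_size'}. 'copy' is handled by
# the accompanying action plugin; 'io_buffer_size' is the bug reported here.
print(documented - declared)
```

Since `copy` stays documented without being declared, the ignore entry may be updated rather than removed outright; the record only tells us that this file changed between the before- and after-fix commits.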
issue_url: https://github.com/ansible/ansible/issues/77267
pull_url: https://github.com/ansible/ansible/pull/77271
before_fix_sha: c555ce1bd90e95cca0d26259de27399d9bf18da4
after_fix_sha: e3c72230cda45798b4d9bd98c7f296d2895c4027
report_datetime: 2022-03-11T21:08:12Z
language: python
commit_datetime: 2022-03-17T19:15:13Z
updated_file: test/sanity/ignore.txt

file_content:
.azure-pipelines/scripts/publish-codecov.py replace-urlopen
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
docs/docsite/rst/locales/ja/LC_MESSAGES/dev_guide.po no-smart-quotes # Translation of the no-smart-quotes rule
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
lib/ansible/cli/galaxy.py import-3.8 # unguarded indirect resolvelib import
lib/ansible/galaxy/collection/__init__.py import-3.8 # unguarded resolvelib import
lib/ansible/galaxy/collection/concrete_artifact_manager.py import-3.8 # unguarded resolvelib import
lib/ansible/galaxy/collection/galaxy_api_proxy.py import-3.8 # unguarded resolvelib imports
lib/ansible/galaxy/collection/gpg.py import-3.8 # unguarded resolvelib imports
lib/ansible/galaxy/dependency_resolution/__init__.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/dataclasses.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/errors.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/providers.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/reporters.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/resolvers.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/versioning.py import-3.8 # circular imports
lib/ansible/cli/galaxy.py import-3.9 # unguarded indirect resolvelib import
lib/ansible/galaxy/collection/__init__.py import-3.9 # unguarded resolvelib import
lib/ansible/galaxy/collection/concrete_artifact_manager.py import-3.9 # unguarded resolvelib import
lib/ansible/galaxy/collection/galaxy_api_proxy.py import-3.9 # unguarded resolvelib imports
lib/ansible/galaxy/collection/gpg.py import-3.9 # unguarded resolvelib imports
lib/ansible/galaxy/dependency_resolution/__init__.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/dataclasses.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/errors.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/providers.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/reporters.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/resolvers.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/versioning.py import-3.9 # circular imports
lib/ansible/cli/galaxy.py import-3.10 # unguarded indirect resolvelib import
lib/ansible/galaxy/collection/__init__.py import-3.10 # unguarded resolvelib import
lib/ansible/galaxy/collection/concrete_artifact_manager.py import-3.10 # unguarded resolvelib import
lib/ansible/galaxy/collection/galaxy_api_proxy.py import-3.10 # unguarded resolvelib imports
lib/ansible/galaxy/collection/gpg.py import-3.10 # unguarded resolvelib imports
lib/ansible/galaxy/dependency_resolution/__init__.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/dataclasses.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/errors.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/providers.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/reporters.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/resolvers.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/versioning.py import-3.10 # circular imports
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:disallowed-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:disallowed-name
lib/ansible/keyword_desc.yml no-unwanted-files
lib/ansible/modules/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/async_status.py use-argspec-type-path
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/command.py validate-modules:doc-default-does-not-match-spec # _uses_shell is undocumented
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/copy.py pylint:disallowed-name
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/git.py pylint:disallowed-name
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-missing-type
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/hostname.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/iptables.py pylint:disallowed-name
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/pip.py pylint:disallowed-name
lib/ansible/modules/pip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/stat.py validate-modules:doc-default-does-not-match-spec # get_md5 is undocumented
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/yum.py pylint:disallowed-name
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py pylint:disallowed-name
lib/ansible/module_utils/compat/selinux.py import-2.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.5!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.6!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.8!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.9!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pylint:using-constant-test # bundled code we don't want to modify
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:disallowed-name
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py pylint:self-assigning-variable
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/urls.py pylint:arguments-renamed
lib/ansible/module_utils/urls.py pylint:disallowed-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/parsing/vault/__init__.py pylint:disallowed-name
lib/ansible/parsing/yaml/objects.py pylint:arguments-renamed
lib/ansible/playbook/base.py pylint:disallowed-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:disallowed-name
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/callback/__init__.py pylint:arguments-renamed
lib/ansible/plugins/inventory/advanced_host_list.py pylint:arguments-renamed
lib/ansible/plugins/inventory/host_list.py pylint:arguments-renamed
lib/ansible/plugins/lookup/random_choice.py pylint:arguments-renamed
lib/ansible/plugins/lookup/sequence.py pylint:disallowed-name
lib/ansible/plugins/shell/cmd.py pylint:arguments-renamed
lib/ansible/plugins/strategy/__init__.py pylint:disallowed-name
lib/ansible/plugins/strategy/linear.py pylint:disallowed-name
lib/ansible/utils/collection_loader/_collection_finder.py pylint:deprecated-class
lib/ansible/utils/collection_loader/_collection_meta.py pylint:deprecated-class
lib/ansible/vars/hostvars.py pylint:disallowed-name
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/gathering_facts/library/facts_one shebang
test/integration/targets/gathering_facts/library/facts_two shebang
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/json_cleanup/library/bad_json shebang
test/integration/targets/lookup_csvfile/files/crlf.csv line-endings
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/foo.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:disallowed-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_fetch/tasks/main.yml no-smart-quotes # We are explictly testing smart quotes in the file name to fetch
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/lib/ansible_test/_data/requirements/sanity.pslint.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/lib/ansible_test/_util/target/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/lib/ansible_test/_util/target/setup/requirements.py replace-urlopen
test/support/integration/plugins/inventory/aws_ec2.py pylint:use-a-generator
test/support/integration/plugins/modules/ec2_group.py pylint:use-a-generator
test/support/integration/plugins/modules/timezone.py pylint:disallowed-name
test/support/integration/plugins/module_utils/aws/core.py pylint:property-with-parameters
test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate
test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate
test/support/integration/plugins/module_utils/cloud.py pylint:isinstance-second-argument-not-valid-type
test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py pylint:use-a-generator
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/filter/network.py pylint:consider-using-dict-comprehension
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py pylint:use-a-generator
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/netconf/default.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/cliconf/ios.py pylint:arguments-renamed
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/cliconf/vyos.py pylint:arguments-renamed
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:disallowed-name
test/support/windows-integration/collections/ansible_collections/ansible/windows/plugins/module_utils/WebRequest.psm1 pslint!skip
test/support/windows-integration/collections/ansible_collections/ansible/windows/plugins/modules/win_uri.ps1 pslint!skip
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/slurp.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_acl.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_certificate_store.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_command.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_file.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_get_url.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_stat.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_tempfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_user_right.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_user.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_whoami.ps1 pslint!skip
test/units/executor/test_play_iterator.py pylint:disallowed-name
test/units/modules/test_apt.py pylint:disallowed-name
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/basic/test_run_command.py pylint:disallowed-name
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/parsing/vault/test_vault.py pylint:disallowed-name
test/units/playbook/role/test_role.py pylint:disallowed-name
test/units/plugins/test_plugins.py pylint:disallowed-name
test/units/template/test_templar.py pylint:disallowed-name
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
lib/ansible/module_utils/six/__init__.py mypy-2.7:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.5:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.6:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.7:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.8:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.9:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.10:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-2.7:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.5:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.6:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.7:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.8:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.9:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.10:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-2.7:assignment # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.5:assignment # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.6:assignment # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.7:assignment # vendored code
lib/ansible/module_utils/six/__init__.py mypy-2.7:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.5:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.6:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.7:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.8:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.9:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.10:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-2.7:var-annotated # vendored code
lib/ansible/module_utils/six/__init__.py mypy-2.7:attr-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.5:var-annotated # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.5:attr-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.6:var-annotated # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.6:attr-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.7:var-annotated # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.7:attr-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.8:var-annotated # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.9:var-annotated # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.10:var-annotated # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-2.7:arg-type # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-3.5:valid-type # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-3.6:valid-type # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-3.7:valid-type # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-2.7:assignment # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-2.7:attr-defined # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-3.5:attr-defined # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-3.6:attr-defined # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-3.7:attr-defined # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-2.7:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.5:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.6:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.7:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.8:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.9:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.10:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-2.7:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.5:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.6:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.7:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.8:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.9:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.10:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-2.7:attr-defined # vendored code
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,267 |
unarchive: io_buffer_size does not work
|
### Summary
The option `io_buffer_size` added in ansible-core 2.12 (#74094) does not appear in the argument spec and thus does not work. (Unlike the other two documented options that are not in the argument spec, this one is not handled by the accompanying action plugin.)
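A minimal sketch of the kind of declaration this implies (hypothetical excerpt; the option name and documented default come from the module docs, and the actual change is in the linked PR):

```python
from ansible.module_utils.basic import AnsibleModule

module = AnsibleModule(
    argument_spec=dict(
        # other unarchive options elided for brevity
        src=dict(type='path', required=True),
        dest=dict(type='path', required=True),
        io_buffer_size=dict(type='int', default=64 * 1024),  # documented default: 65536 bytes
    ),
)
```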
### Issue Type
Bug Report
### Component Name
unarchive
### Ansible Version
```console
2.12+
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
-
### Expected Results
-
### Actual Results
```console
Argument spec validation will reject this option.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77267
|
https://github.com/ansible/ansible/pull/77271
|
c555ce1bd90e95cca0d26259de27399d9bf18da4
|
e3c72230cda45798b4d9bd98c7f296d2895c4027
| 2022-03-11T21:08:12Z |
python
| 2022-03-17T19:15:13Z |
test/units/modules/test_unarchive.py
|
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import pytest
from ansible.modules.unarchive import ZipArchive, TgzArchive
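# The fake module below raises these exceptions instead of exiting the process,
# so tests can catch them and assert on how the module terminated.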
class AnsibleModuleExit(Exception):
def __init__(self, *args, **kwargs):
self.args = args
self.kwargs = kwargs
class ExitJson(AnsibleModuleExit):
pass
class FailJson(AnsibleModuleExit):
pass
@pytest.fixture
def fake_ansible_module():
return FakeAnsibleModule()
class FakeAnsibleModule:
def __init__(self):
self.params = {}
self.tmpdir = None
def exit_json(self, *args, **kwargs):
raise ExitJson(*args, **kwargs)
def fail_json(self, *args, **kwargs):
raise FailJson(*args, **kwargs)
class TestCaseZipArchive:
@pytest.mark.parametrize(
'side_effect, expected_reason', (
([ValueError, '/bin/zipinfo'], "Unable to find required 'unzip'"),
(ValueError, "Unable to find required 'unzip' or 'zipinfo'"),
)
)
def test_no_zip_zipinfo_binary(self, mocker, fake_ansible_module, side_effect, expected_reason):
mocker.patch("ansible.modules.unarchive.get_bin_path", side_effect=side_effect)
fake_ansible_module.params = {
"extra_opts": "",
"exclude": "",
"include": "",
}
z = ZipArchive(
src="",
b_dest="",
file_args="",
module=fake_ansible_module,
)
can_handle, reason = z.can_handle_archive()
assert can_handle is False
assert expected_reason in reason
assert z.cmd_path is None
class TestCaseTgzArchive:
def test_no_tar_binary(self, mocker, fake_ansible_module):
mocker.patch("ansible.modules.unarchive.get_bin_path", side_effect=ValueError)
fake_ansible_module.params = {
"extra_opts": "",
"exclude": "",
"include": "",
}
fake_ansible_module.check_mode = False
t = TgzArchive(
src="",
b_dest="",
file_args="",
module=fake_ansible_module,
)
can_handle, reason = t.can_handle_archive()
assert can_handle is False
assert 'Unable to find required' in reason
assert t.cmd_path is None
assert t.tar_type is None
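# These units run directly under pytest from a source checkout, e.g.:
#   pytest test/units/modules/test_unarchive.py
# (illustrative invocation; the `mocker` fixture is provided by pytest-mock)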
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,286 |
Failed to detect deepin/Uos Linux distro info correctly
|
### Summary
Deepin Linux is a Debian-based distro. Ansible is unable to detect Deepin/UOS Linux distro info correctly.
### Issue Type
Bug Report
### Component Name
lib/ansible/module_utils/facts/system/distribution.py
### Ansible Version
```console
$ ansible --version
ansible 2.7.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/chanth/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.3 (default, Apr 2 2021, 05:20:44) [GCC 8.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Deepin 20.4 (apricot)
### Steps to Reproduce
```bash
ansible all -m setup -i"`hostname`," --connection=local -a"filter=ansible_distribution*"
```
### Expected Results
```console
chanth-PC | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Deepin",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "20",
"ansible_distribution_release": "apricot",
"ansible_distribution_version": "20.4"
},
"changed": false
}
```
### Actual Results
```console
chanth-PC | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Deepin 20.4",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/usr/lib/os-release",
"ansible_distribution_file_variety": "ClearLinux",
"ansible_distribution_major_version": "\"20.4\"",
"ansible_distribution_release": "\"20.4\"",
"ansible_distribution_version": "\"20.4\""
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77286
|
https://github.com/ansible/ansible/pull/77275
|
4d984613f5e16e205434cdf7290572e62b40bf62
|
34e60c0a7ab67d5d7dbf133de728aeb68398dd02
| 2022-03-15T15:10:55Z |
python
| 2022-03-17T19:22:23Z |
changelogs/fragments/77275-add-support-for-deepin-distro-info-detection.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,286 |
Failed to detect deepin/Uos Linux distro info correctly
|
### Summary
Deepin Linux is a Debian-based distro. Ansible is unable to detect Deepin/UOS Linux distro info correctly.
### Issue Type
Bug Report
### Component Name
lib/ansible/module_utils/facts/system/distribution.py
### Ansible Version
```console
$ ansible --version
ansible 2.7.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/chanth/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.3 (default, Apr 2 2021, 05:20:44) [GCC 8.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Deepin 20.4 (apricot)
### Steps to Reproduce
```bash
ansible all -m setup -i"`hostname`," --connection=local -a"filter=ansible_distribution*"
```
### Expected Results
```console
chanth-PC | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Deepin",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "20",
"ansible_distribution_release": "apricot",
"ansible_distribution_version": "20.4"
},
"changed": false
}
```
### Actual Results
```console
chanth-PC | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Deepin 20.4",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/usr/lib/os-release",
"ansible_distribution_file_variety": "ClearLinux",
"ansible_distribution_major_version": "\"20.4\"",
"ansible_distribution_release": "\"20.4\"",
"ansible_distribution_version": "\"20.4\""
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77286
|
https://github.com/ansible/ansible/pull/77275
|
4d984613f5e16e205434cdf7290572e62b40bf62
|
34e60c0a7ab67d5d7dbf133de728aeb68398dd02
| 2022-03-15T15:10:55Z |
python
| 2022-03-17T19:22:23Z |
lib/ansible/module_utils/facts/system/distribution.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import platform
import re
import ansible.module_utils.compat.typing as t
from ansible.module_utils.common.sys_info import get_distribution, get_distribution_version, \
get_distribution_codename
from ansible.module_utils.facts.utils import get_file_content
from ansible.module_utils.facts.collector import BaseFactCollector
def get_uname(module, flags=('-v')):
if isinstance(flags, str):
flags = flags.split()
command = ['uname']
command.extend(flags)
rc, out, err = module.run_command(command)
if rc == 0:
return out
return None
def _file_exists(path, allow_empty=False):
# not finding the file, exit early
if not os.path.exists(path):
return False
# if just the path needs to exists (ie, it can be empty) we are done
if allow_empty:
return True
    # file exists but is empty and we don't allow_empty
if os.path.getsize(path) == 0:
return False
# file exists with some content
return True
class DistributionFiles:
'''has-a various distro file parsers (os-release, etc) and logic for finding the right one.'''
# every distribution name mentioned here, must have one of
# - allowempty == True
# - be listed in SEARCH_STRING
# - have a function get_distribution_DISTNAME implemented
# keep names in sync with Conditionals page of docs
OSDIST_LIST = (
{'path': '/etc/altlinux-release', 'name': 'Altlinux'},
{'path': '/etc/oracle-release', 'name': 'OracleLinux'},
{'path': '/etc/slackware-version', 'name': 'Slackware'},
{'path': '/etc/centos-release', 'name': 'CentOS'},
{'path': '/etc/redhat-release', 'name': 'RedHat'},
{'path': '/etc/vmware-release', 'name': 'VMwareESX', 'allowempty': True},
{'path': '/etc/openwrt_release', 'name': 'OpenWrt'},
{'path': '/etc/os-release', 'name': 'Amazon'},
{'path': '/etc/system-release', 'name': 'Amazon'},
{'path': '/etc/alpine-release', 'name': 'Alpine'},
{'path': '/etc/arch-release', 'name': 'Archlinux', 'allowempty': True},
{'path': '/etc/os-release', 'name': 'Archlinux'},
{'path': '/etc/os-release', 'name': 'SUSE'},
{'path': '/etc/SuSE-release', 'name': 'SUSE'},
{'path': '/etc/gentoo-release', 'name': 'Gentoo'},
{'path': '/etc/os-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Mandriva'},
{'path': '/etc/sourcemage-release', 'name': 'SMGL'},
{'path': '/usr/lib/os-release', 'name': 'ClearLinux'},
{'path': '/etc/coreos/update.conf', 'name': 'Coreos'},
{'path': '/etc/flatcar/update.conf', 'name': 'Flatcar'},
{'path': '/etc/os-release', 'name': 'NA'},
)
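    # NOTE: entries are tried in order by process_dist_files(); when a parser
    # returns False for a file (as the Debian parser does for an os-release it
    # does not recognize), the loop keeps going, so an unknown Debian
    # derivative such as Deepin can end up classified by a later, less
    # specific entry.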
SEARCH_STRING = {
'OracleLinux': 'Oracle Linux',
'RedHat': 'Red Hat',
'Altlinux': 'ALT',
'SMGL': 'Source Mage GNU/Linux',
}
# We can't include this in SEARCH_STRING because a name match on its keys
# causes a fallback to using the first whitespace separated item from the file content
# as the name. For os-release, that is in form 'NAME=Arch'
OS_RELEASE_ALIAS = {
'Archlinux': 'Arch Linux'
}
STRIP_QUOTES = r'\'\"\\'
def __init__(self, module):
self.module = module
def _get_file_content(self, path):
return get_file_content(path)
def _get_dist_file_content(self, path, allow_empty=False):
        # can't find that dist file or it is incorrectly empty
if not _file_exists(path, allow_empty=allow_empty):
return False, None
data = self._get_file_content(path)
return True, data
def _parse_dist_file(self, name, dist_file_content, path, collected_facts):
dist_file_dict = {}
dist_file_content = dist_file_content.strip(DistributionFiles.STRIP_QUOTES)
if name in self.SEARCH_STRING:
# look for the distribution string in the data and replace according to RELEASE_NAME_MAP
# only the distribution name is set, the version is assumed to be correct from distro.linux_distribution()
if self.SEARCH_STRING[name] in dist_file_content:
# this sets distribution=RedHat if 'Red Hat' shows up in data
dist_file_dict['distribution'] = name
dist_file_dict['distribution_file_search_string'] = self.SEARCH_STRING[name]
else:
# this sets distribution to what's in the data, e.g. CentOS, Scientific, ...
dist_file_dict['distribution'] = dist_file_content.split()[0]
return True, dist_file_dict
if name in self.OS_RELEASE_ALIAS:
if self.OS_RELEASE_ALIAS[name] in dist_file_content:
dist_file_dict['distribution'] = name
return True, dist_file_dict
return False, dist_file_dict
# call a dedicated function for parsing the file content
# TODO: replace with a map or a class
try:
            # FIXME: most of these don't actually look at the dist file contents, but random other stuff
distfunc_name = 'parse_distribution_file_' + name
distfunc = getattr(self, distfunc_name)
parsed, dist_file_dict = distfunc(name, dist_file_content, path, collected_facts)
return parsed, dist_file_dict
except AttributeError as exc:
self.module.debug('exc: %s' % exc)
# this should never happen, but if it does fail quietly and not with a traceback
return False, dist_file_dict
return True, dist_file_dict
# to debug multiple matching release files, one can use:
# self.facts['distribution_debug'].append({path + ' ' + name:
# (parsed,
# self.facts['distribution'],
# self.facts['distribution_version'],
# self.facts['distribution_release'],
# )})
def _guess_distribution(self):
# try to find out which linux distribution this is
dist = (get_distribution(), get_distribution_version(), get_distribution_codename())
distribution_guess = {
'distribution': dist[0] or 'NA',
'distribution_version': dist[1] or 'NA',
# distribution_release can be the empty string
'distribution_release': 'NA' if dist[2] is None else dist[2]
}
distribution_guess['distribution_major_version'] = distribution_guess['distribution_version'].split('.')[0] or 'NA'
return distribution_guess
def process_dist_files(self):
# Try to handle the exceptions now ...
# self.facts['distribution_debug'] = []
dist_file_facts = {}
dist_guess = self._guess_distribution()
dist_file_facts.update(dist_guess)
for ddict in self.OSDIST_LIST:
name = ddict['name']
path = ddict['path']
allow_empty = ddict.get('allowempty', False)
has_dist_file, dist_file_content = self._get_dist_file_content(path, allow_empty=allow_empty)
            # the file may be empty, but allow_empty lets it match: for example, ArchLinux
            # ships an empty /etc/arch-release alongside an /etc/os-release with a different name
if has_dist_file and allow_empty:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
dist_file_facts['distribution_file_variety'] = name
break
if not has_dist_file:
# keep looking
continue
parsed_dist_file, parsed_dist_file_facts = self._parse_dist_file(name, dist_file_content, path, dist_file_facts)
# finally found the right os dist file and were able to parse it
if parsed_dist_file:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
# distribution and file_variety are the same here, but distribution
# will be changed/mapped to a more specific name.
# ie, dist=Fedora, file_variety=RedHat
dist_file_facts['distribution_file_variety'] = name
dist_file_facts['distribution_file_parsed'] = parsed_dist_file
dist_file_facts.update(parsed_dist_file_facts)
break
return dist_file_facts
# TODO: FIXME: split distro file parsing into its own module or class
def parse_distribution_file_Slackware(self, name, data, path, collected_facts):
slackware_facts = {}
if 'Slackware' not in data:
return False, slackware_facts # TODO: remove
slackware_facts['distribution'] = name
version = re.findall(r'\w+[.]\w+\+?', data)
if version:
slackware_facts['distribution_version'] = version[0]
return True, slackware_facts
def parse_distribution_file_Amazon(self, name, data, path, collected_facts):
amazon_facts = {}
if 'Amazon' not in data:
return False, amazon_facts
amazon_facts['distribution'] = 'Amazon'
if path == '/etc/os-release':
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
distribution_version = version.group(1)
amazon_facts['distribution_version'] = distribution_version
version_data = distribution_version.split(".")
if len(version_data) > 1:
major, minor = version_data
else:
major, minor = version_data[0], 'NA'
amazon_facts['distribution_major_version'] = major
amazon_facts['distribution_minor_version'] = minor
else:
version = [n for n in data.split() if n.isdigit()]
version = version[0] if version else 'NA'
amazon_facts['distribution_version'] = version
return True, amazon_facts
def parse_distribution_file_OpenWrt(self, name, data, path, collected_facts):
openwrt_facts = {}
if 'OpenWrt' not in data:
return False, openwrt_facts # TODO: remove
openwrt_facts['distribution'] = name
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
openwrt_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
openwrt_facts['distribution_release'] = release.groups()[0]
return True, openwrt_facts
def parse_distribution_file_Alpine(self, name, data, path, collected_facts):
alpine_facts = {}
alpine_facts['distribution'] = 'Alpine'
alpine_facts['distribution_version'] = data
return True, alpine_facts
def parse_distribution_file_SUSE(self, name, data, path, collected_facts):
suse_facts = {}
if 'suse' not in data.lower():
return False, suse_facts # TODO: remove if tested without this
if path == '/etc/os-release':
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution:
suse_facts['distribution'] = distribution.group(1).strip('"')
                # example patterns: 13.04, 13.0, 13
distribution_version = re.search(r'^VERSION_ID="?([0-9]+\.?[0-9]*)"?', line)
if distribution_version:
suse_facts['distribution_version'] = distribution_version.group(1)
suse_facts['distribution_major_version'] = distribution_version.group(1).split('.')[0]
if 'open' in data.lower():
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release:
suse_facts['distribution_release'] = release.groups()[0]
elif 'enterprise' in data.lower() and 'VERSION_ID' in line:
                    # SLES doesn't have funny release names
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release.group(1):
release = release.group(1)
else:
release = "0" # no minor number, so it is the first release
suse_facts['distribution_release'] = release
elif path == '/etc/SuSE-release':
if 'open' in data.lower():
data = data.splitlines()
distdata = get_file_content(path).splitlines()[0]
suse_facts['distribution'] = distdata.split()[0]
for line in data:
release = re.search('CODENAME *= *([^\n]+)', line)
if release:
suse_facts['distribution_release'] = release.groups()[0].strip()
elif 'enterprise' in data.lower():
lines = data.splitlines()
distribution = lines[0].split()[0]
if "Server" in data:
suse_facts['distribution'] = "SLES"
elif "Desktop" in data:
suse_facts['distribution'] = "SLED"
for line in lines:
                    release = re.search('PATCHLEVEL = ([0-9]+)', line)  # SLES doesn't have funny release names
if release:
suse_facts['distribution_release'] = release.group(1)
suse_facts['distribution_version'] = collected_facts['distribution_version'] + '.' + release.group(1)
# See https://www.suse.com/support/kb/doc/?id=000019341 for SLES for SAP
if os.path.islink('/etc/products.d/baseproduct') and os.path.realpath('/etc/products.d/baseproduct').endswith('SLES_SAP.prod'):
suse_facts['distribution'] = 'SLES_SAP'
return True, suse_facts
def parse_distribution_file_Debian(self, name, data, path, collected_facts):
debian_facts = {}
if 'Debian' in data or 'Raspbian' in data:
debian_facts['distribution'] = 'Debian'
release = re.search(r"PRETTY_NAME=[^(]+ \(?([^)]+?)\)", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
# Last resort: try to find release from tzdata as either lsb is missing or this is very old debian
if collected_facts['distribution_release'] == 'NA' and 'Debian' in data:
dpkg_cmd = self.module.get_bin_path('dpkg')
if dpkg_cmd:
cmd = "%s --status tzdata|grep Provides|cut -f2 -d'-'" % dpkg_cmd
rc, out, err = self.module.run_command(cmd)
if rc == 0:
debian_facts['distribution_release'] = out.strip()
elif 'Ubuntu' in data:
debian_facts['distribution'] = 'Ubuntu'
# nothing else to do, Ubuntu gets correct info from python functions
elif 'SteamOS' in data:
debian_facts['distribution'] = 'SteamOS'
# nothing else to do, SteamOS gets correct info from python functions
elif path in ('/etc/lsb-release', '/etc/os-release') and ('Kali' in data or 'Parrot' in data):
if 'Kali' in data:
# Kali does not provide /etc/lsb-release anymore
debian_facts['distribution'] = 'Kali'
elif 'Parrot' in data:
debian_facts['distribution'] = 'Parrot'
release = re.search('DISTRIB_RELEASE=(.*)', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif 'Devuan' in data:
debian_facts['distribution'] = 'Devuan'
release = re.search(r"PRETTY_NAME=\"?[^(\"]+ \(?([^) \"]+)\)?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1)
elif 'Cumulus' in data:
debian_facts['distribution'] = 'Cumulus Linux'
version = re.search(r"VERSION_ID=(.*)", data)
if version:
major, _minor, _dummy_ver = version.group(1).split(".")
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = major
release = re.search(r'VERSION="(.*)"', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif "Mint" in data:
debian_facts['distribution'] = 'Linux Mint'
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
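        elif 'Deepin' in data:
            # Illustrative sketch for the Deepin/UOS case in this report, not
            # necessarily the merged fix (field names are assumptions about
            # Deepin's os-release): classify it as Debian family here so the
            # loop in process_dist_files() stops at /etc/os-release instead of
            # falling through to a later handler. 'Deepin' would also need to
            # be listed under 'Debian' in OS_FAMILY_MAP below.
            debian_facts['distribution'] = 'Deepin'
            version = re.search(r"VERSION_ID=\"(.*)\"", data)
            if version:
                debian_facts['distribution_version'] = version.group(1)
                debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
            release = re.search(r"VERSION_CODENAME=(.*)", data)
            if release:
                debian_facts['distribution_release'] = release.groups()[0]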
else:
return False, debian_facts
return True, debian_facts
def parse_distribution_file_Mandriva(self, name, data, path, collected_facts):
mandriva_facts = {}
if 'Mandriva' in data:
mandriva_facts['distribution'] = 'Mandriva'
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
mandriva_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
mandriva_facts['distribution_release'] = release.groups()[0]
mandriva_facts['distribution'] = name
else:
return False, mandriva_facts
return True, mandriva_facts
def parse_distribution_file_NA(self, name, data, path, collected_facts):
na_facts = {}
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution and name == 'NA':
na_facts['distribution'] = distribution.group(1).strip('"')
version = re.search("^VERSION=(.*)", line)
if version and collected_facts['distribution_version'] == 'NA':
na_facts['distribution_version'] = version.group(1).strip('"')
return True, na_facts
def parse_distribution_file_Coreos(self, name, data, path, collected_facts):
coreos_facts = {}
# FIXME: pass in ro copy of facts for this kind of thing
distro = get_distribution()
if distro.lower() == 'coreos':
if not data:
# include fix from #15230, #15228
# TODO: verify this is ok for above bugs
return False, coreos_facts
release = re.search("^GROUP=(.*)", data)
if release:
coreos_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, coreos_facts # TODO: remove if tested without this
return True, coreos_facts
def parse_distribution_file_Flatcar(self, name, data, path, collected_facts):
flatcar_facts = {}
distro = get_distribution()
if distro.lower() == 'flatcar':
if not data:
return False, flatcar_facts
release = re.search("^GROUP=(.*)", data)
if release:
flatcar_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, flatcar_facts
return True, flatcar_facts
def parse_distribution_file_ClearLinux(self, name, data, path, collected_facts):
clear_facts = {}
if "clearlinux" not in name.lower():
return False, clear_facts
pname = re.search('NAME="(.*)"', data)
if pname:
if 'Clear Linux' not in pname.groups()[0]:
return False, clear_facts
clear_facts['distribution'] = pname.groups()[0]
version = re.search('VERSION_ID=(.*)', data)
if version:
clear_facts['distribution_major_version'] = version.groups()[0]
clear_facts['distribution_version'] = version.groups()[0]
release = re.search('ID=(.*)', data)
if release:
clear_facts['distribution_release'] = release.groups()[0]
return True, clear_facts
def parse_distribution_file_CentOS(self, name, data, path, collected_facts):
centos_facts = {}
if 'CentOS Stream' in data:
centos_facts['distribution_release'] = 'Stream'
return True, centos_facts
if "TencentOS Server" in data:
centos_facts['distribution'] = 'TencentOS'
return True, centos_facts
return False, centos_facts
class Distribution(object):
"""
This subclass of Facts fills the distribution, distribution_version and distribution_release variables
To do so it checks the existence and content of typical files in /etc containing distribution information
This is unit tested. Please extend the tests to cover all distributions if you have them available.
"""
# keep keys in sync with Conditionals page of docs
OS_FAMILY_MAP = {'RedHat': ['RedHat', 'Fedora', 'CentOS', 'Scientific', 'SLC',
'Ascendos', 'CloudLinux', 'PSBM', 'OracleLinux', 'OVS',
'OEL', 'Amazon', 'Virtuozzo', 'XenServer', 'Alibaba',
'EulerOS', 'openEuler', 'AlmaLinux', 'Rocky', 'TencentOS',
'EuroLinux'],
'Debian': ['Debian', 'Ubuntu', 'Raspbian', 'Neon', 'KDE neon',
'Linux Mint', 'SteamOS', 'Devuan', 'Kali', 'Cumulus Linux',
'Pop!_OS', 'Parrot', 'Pardus GNU/Linux'],
'Suse': ['SuSE', 'SLES', 'SLED', 'openSUSE', 'openSUSE Tumbleweed',
'SLES_SAP', 'SUSE_LINUX', 'openSUSE Leap'],
'Archlinux': ['Archlinux', 'Antergos', 'Manjaro'],
'Mandrake': ['Mandrake', 'Mandriva'],
'Solaris': ['Solaris', 'Nexenta', 'OmniOS', 'OpenIndiana', 'SmartOS'],
'Slackware': ['Slackware'],
'Altlinux': ['Altlinux'],
'SGML': ['SGML'],
'Gentoo': ['Gentoo', 'Funtoo'],
'Alpine': ['Alpine'],
'AIX': ['AIX'],
'HP-UX': ['HPUX'],
'Darwin': ['MacOSX'],
'FreeBSD': ['FreeBSD', 'TrueOS'],
'ClearLinux': ['Clear Linux OS', 'Clear Linux Mix'],
'DragonFly': ['DragonflyBSD', 'DragonFlyBSD', 'Gentoo/DragonflyBSD', 'Gentoo/DragonFlyBSD'],
'NetBSD': ['NetBSD'], }
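    # Flatten OS_FAMILY_MAP into a distribution-name -> family lookup.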
OS_FAMILY = {}
for family, names in OS_FAMILY_MAP.items():
for name in names:
OS_FAMILY[name] = family
def __init__(self, module):
self.module = module
def get_distribution_facts(self):
distribution_facts = {}
# The platform module provides information about the running
# system/distribution. Use this as a baseline and fix buggy systems
# afterwards
system = platform.system()
distribution_facts['distribution'] = system
distribution_facts['distribution_release'] = platform.release()
distribution_facts['distribution_version'] = platform.version()
systems_implemented = ('AIX', 'HP-UX', 'Darwin', 'FreeBSD', 'OpenBSD', 'SunOS', 'DragonFly', 'NetBSD')
if system in systems_implemented:
cleanedname = system.replace('-', '')
distfunc = getattr(self, 'get_distribution_' + cleanedname)
dist_func_facts = distfunc()
distribution_facts.update(dist_func_facts)
elif system == 'Linux':
distribution_files = DistributionFiles(module=self.module)
# linux_distribution_facts = LinuxDistribution(module).get_distribution_facts()
dist_file_facts = distribution_files.process_dist_files()
distribution_facts.update(dist_file_facts)
distro = distribution_facts['distribution']
        # look for an OS family alias for the 'distribution'; if there isn't one, use 'distribution'
distribution_facts['os_family'] = self.OS_FAMILY.get(distro, None) or distro
return distribution_facts
def get_distribution_AIX(self):
aix_facts = {}
rc, out, err = self.module.run_command("/usr/bin/oslevel")
data = out.split('.')
aix_facts['distribution_major_version'] = data[0]
if len(data) > 1:
aix_facts['distribution_version'] = '%s.%s' % (data[0], data[1])
aix_facts['distribution_release'] = data[1]
else:
aix_facts['distribution_version'] = data[0]
return aix_facts
def get_distribution_HPUX(self):
hpux_facts = {}
rc, out, err = self.module.run_command(r"/usr/sbin/swlist |egrep 'HPUX.*OE.*[AB].[0-9]+\.[0-9]+'", use_unsafe_shell=True)
data = re.search(r'HPUX.*OE.*([AB].[0-9]+\.[0-9]+)\.([0-9]+).*', out)
if data:
hpux_facts['distribution_version'] = data.groups()[0]
hpux_facts['distribution_release'] = data.groups()[1]
return hpux_facts
def get_distribution_Darwin(self):
darwin_facts = {}
darwin_facts['distribution'] = 'MacOSX'
rc, out, err = self.module.run_command("/usr/bin/sw_vers -productVersion")
data = out.split()[-1]
if data:
darwin_facts['distribution_major_version'] = data.split('.')[0]
darwin_facts['distribution_version'] = data
return darwin_facts
def get_distribution_FreeBSD(self):
freebsd_facts = {}
freebsd_facts['distribution_release'] = platform.release()
data = re.search(r'(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT|RC|PRERELEASE).*', freebsd_facts['distribution_release'])
if 'trueos' in platform.version():
freebsd_facts['distribution'] = 'TrueOS'
if data:
freebsd_facts['distribution_major_version'] = data.group(1)
freebsd_facts['distribution_version'] = '%s.%s' % (data.group(1), data.group(2))
return freebsd_facts
def get_distribution_OpenBSD(self):
openbsd_facts = {}
openbsd_facts['distribution_version'] = platform.release()
rc, out, err = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'OpenBSD\s[0-9]+.[0-9]+-(\S+)\s.*', out)
if match:
openbsd_facts['distribution_release'] = match.groups()[0]
else:
openbsd_facts['distribution_release'] = 'release'
return openbsd_facts
def get_distribution_DragonFly(self):
dragonfly_facts = {
'distribution_release': platform.release()
}
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.search(r'v(\d+)\.(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT).*', out)
if match:
dragonfly_facts['distribution_major_version'] = match.group(1)
dragonfly_facts['distribution_version'] = '%s.%s.%s' % match.groups()[:3]
return dragonfly_facts
def get_distribution_NetBSD(self):
netbsd_facts = {}
platform_release = platform.release()
netbsd_facts['distribution_release'] = platform_release
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'NetBSD\s(\d+)\.(\d+)\s\((GENERIC)\).*', out)
if match:
netbsd_facts['distribution_major_version'] = match.group(1)
netbsd_facts['distribution_version'] = '%s.%s' % match.groups()[:2]
else:
netbsd_facts['distribution_major_version'] = platform_release.split('.')[0]
netbsd_facts['distribution_version'] = platform_release
return netbsd_facts
def get_distribution_SMGL(self):
smgl_facts = {}
smgl_facts['distribution'] = 'Source Mage GNU/Linux'
return smgl_facts
def get_distribution_SunOS(self):
sunos_facts = {}
data = get_file_content('/etc/release').splitlines()[0]
if 'Solaris' in data:
# for solaris 10 uname_r will contain 5.10, for solaris 11 it will have 5.11
uname_r = get_uname(self.module, flags=['-r'])
ora_prefix = ''
if 'Oracle Solaris' in data:
data = data.replace('Oracle ', '')
ora_prefix = 'Oracle '
sunos_facts['distribution'] = data.split()[0]
sunos_facts['distribution_version'] = data.split()[1]
sunos_facts['distribution_release'] = ora_prefix + data
sunos_facts['distribution_major_version'] = uname_r.split('.')[1].rstrip()
return sunos_facts
uname_v = get_uname(self.module, flags=['-v'])
distribution_version = None
if 'SmartOS' in data:
sunos_facts['distribution'] = 'SmartOS'
if _file_exists('/etc/product'):
product_data = dict([l.split(': ', 1) for l in get_file_content('/etc/product').splitlines() if ': ' in l])
if 'Image' in product_data:
distribution_version = product_data.get('Image').split()[-1]
elif 'OpenIndiana' in data:
sunos_facts['distribution'] = 'OpenIndiana'
elif 'OmniOS' in data:
sunos_facts['distribution'] = 'OmniOS'
distribution_version = data.split()[-1]
elif uname_v is not None and 'NexentaOS_' in uname_v:
sunos_facts['distribution'] = 'Nexenta'
distribution_version = data.split()[-1].lstrip('v')
if sunos_facts.get('distribution', '') in ('SmartOS', 'OpenIndiana', 'OmniOS', 'Nexenta'):
sunos_facts['distribution_release'] = data.strip()
if distribution_version is not None:
sunos_facts['distribution_version'] = distribution_version
elif uname_v is not None:
sunos_facts['distribution_version'] = uname_v.splitlines()[0].strip()
return sunos_facts
return sunos_facts
class DistributionFactCollector(BaseFactCollector):
name = 'distribution'
_fact_ids = set(['distribution_version',
'distribution_release',
'distribution_major_version',
'os_family']) # type: t.Set[str]
def collect(self, module=None, collected_facts=None):
collected_facts = collected_facts or {}
facts_dict = {}
if not module:
return facts_dict
distribution = Distribution(module=module)
distro_facts = distribution.get_distribution_facts()
return distro_facts
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,286 |
Failed to detect deepin/Uos Linux distro info correctly
|
### Summary
Deepin Linux is a Debian-based distro. Ansible is unable to detect Deepin/UOS Linux distro info correctly.
### Issue Type
Bug Report
### Component Name
lib/ansible/module_utils/facts/system/distribution.py
### Ansible Version
```console
$ ansible --version
ansible 2.7.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/chanth/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.3 (default, Apr 2 2021, 05:20:44) [GCC 8.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Deepin 20.4 (apricot)
### Steps to Reproduce
```bash
ansible all -m setup -i"`hostname`," --connection=local -a"filter=ansible_distribution*"
```
### Expected Results
```console
chanth-PC | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Deepin",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "20",
"ansible_distribution_release": "apricot",
"ansible_distribution_version": "20.4"
},
"changed": false
}
```
### Actual Results
```console
chanth-PC | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Deepin 20.4",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/usr/lib/os-release",
"ansible_distribution_file_variety": "ClearLinux",
"ansible_distribution_major_version": "\"20.4\"",
"ansible_distribution_release": "\"20.4\"",
"ansible_distribution_version": "\"20.4\""
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77286
|
https://github.com/ansible/ansible/pull/77275
|
4d984613f5e16e205434cdf7290572e62b40bf62
|
34e60c0a7ab67d5d7dbf133de728aeb68398dd02
| 2022-03-15T15:10:55Z |
python
| 2022-03-17T19:22:23Z |
lib/ansible/modules/hostname.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Hiroaki Nakamura <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: hostname
author:
- Adrian Likins (@alikins)
- Hideki Saito (@saito-hideki)
version_added: "1.4"
short_description: Manage hostname
requirements: [ hostname ]
description:
- Set system's hostname. Supports most OSs/Distributions including those using C(systemd).
- Windows, HP-UX, and AIX are not currently supported.
notes:
- This module does B(NOT) modify C(/etc/hosts). You need to modify it yourself using other modules such as M(ansible.builtin.template)
or M(ansible.builtin.replace).
- On macOS, this module uses C(scutil) to set C(HostName), C(ComputerName), and C(LocalHostName). Since C(LocalHostName)
cannot contain spaces or most special characters, this module will replace characters when setting C(LocalHostName).
options:
name:
description:
- Name of the host.
- If the value is a fully qualified domain name that does not resolve from the given host,
this will cause the module to hang for a few seconds while waiting for the name resolution attempt to timeout.
type: str
required: true
use:
description:
- Which strategy to use to update the hostname.
- If not set we try to autodetect, but this can be problematic, particularly with containers as they can present misleading information.
- Note that 'systemd' should be specified for RHEL/EL/CentOS 7+. Older distributions should use 'redhat'.
choices: ['alpine', 'debian', 'freebsd', 'generic', 'macos', 'macosx', 'darwin', 'openbsd', 'openrc', 'redhat', 'sles', 'solaris', 'systemd']
type: str
version_added: '2.9'
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.facts
attributes:
check_mode:
support: full
diff_mode:
support: full
facts:
support: full
platform:
platforms: posix
'''
EXAMPLES = '''
- name: Set a hostname
ansible.builtin.hostname:
name: web01
- name: Set a hostname specifying strategy
ansible.builtin.hostname:
name: web01
use: systemd
'''
import os
import platform
import socket
import traceback
import ansible.module_utils.compat.typing as t
from ansible.module_utils.basic import (
AnsibleModule,
get_distribution,
get_distribution_version,
)
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils.facts.utils import get_file_lines, get_file_content
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six import PY3, text_type
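# Maps the user-facing `use` choices to strategy class name prefixes; resolved
# at runtime as globals()['%sStrategy' % STRATS[use]] in Hostname.__init__.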
STRATS = {
'alpine': 'Alpine',
'debian': 'Systemd',
'freebsd': 'FreeBSD',
'generic': 'Base',
'macos': 'Darwin',
'macosx': 'Darwin',
'darwin': 'Darwin',
'openbsd': 'OpenBSD',
'openrc': 'OpenRC',
'redhat': 'RedHat',
'sles': 'SLES',
'solaris': 'Solaris',
'systemd': 'Systemd',
}
class BaseStrategy(object):
def __init__(self, module):
self.module = module
self.changed = False
def update_current_and_permanent_hostname(self):
self.update_current_hostname()
self.update_permanent_hostname()
return self.changed
def update_current_hostname(self):
name = self.module.params['name']
current_name = self.get_current_hostname()
if current_name != name:
if not self.module.check_mode:
self.set_current_hostname(name)
self.changed = True
def update_permanent_hostname(self):
name = self.module.params['name']
permanent_name = self.get_permanent_hostname()
if permanent_name != name:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
def get_current_hostname(self):
return self.get_permanent_hostname()
def set_current_hostname(self, name):
pass
def get_permanent_hostname(self):
raise NotImplementedError
def set_permanent_hostname(self, name):
raise NotImplementedError
class UnimplementedStrategy(BaseStrategy):
def update_current_and_permanent_hostname(self):
self.unimplemented_error()
def update_current_hostname(self):
self.unimplemented_error()
def update_permanent_hostname(self):
self.unimplemented_error()
def get_current_hostname(self):
self.unimplemented_error()
def set_current_hostname(self, name):
self.unimplemented_error()
def get_permanent_hostname(self):
self.unimplemented_error()
def set_permanent_hostname(self, name):
self.unimplemented_error()
def unimplemented_error(self):
system = platform.system()
distribution = get_distribution()
if distribution is not None:
msg_platform = '%s (%s)' % (system, distribution)
else:
msg_platform = system
self.module.fail_json(
msg='hostname module cannot be used on platform %s' % msg_platform)
class CommandStrategy(BaseStrategy):
COMMAND = 'hostname'
def __init__(self, module):
super(CommandStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
return 'UNKNOWN'
def set_permanent_hostname(self, name):
pass
class FileStrategy(BaseStrategy):
FILE = '/etc/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
return get_file_content(self.FILE, default='', strip=True)
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
with open(self.FILE, 'w+') as f:
f.write("%s\n" % name)
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class SLESStrategy(FileStrategy):
"""
This is a SLES Hostname strategy class - it edits the
/etc/HOSTNAME file.
"""
FILE = '/etc/HOSTNAME'
class RedHatStrategy(BaseStrategy):
"""
This is a Redhat Hostname strategy class - it edits the
/etc/sysconfig/network file.
"""
NETWORK_FILE = '/etc/sysconfig/network'
def get_permanent_hostname(self):
try:
for line in get_file_lines(self.NETWORK_FILE):
line = to_native(line).strip()
if line.startswith('HOSTNAME'):
k, v = line.split('=')
return v.strip()
self.module.fail_json(
"Unable to locate HOSTNAME entry in %s" % self.NETWORK_FILE
)
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = []
found = False
content = get_file_content(self.NETWORK_FILE, strip=False) or ""
for line in content.splitlines(True):
line = to_native(line)
if line.strip().startswith('HOSTNAME'):
lines.append("HOSTNAME=%s\n" % name)
found = True
else:
lines.append(line)
if not found:
lines.append("HOSTNAME=%s\n" % name)
with open(self.NETWORK_FILE, 'w+') as f:
f.writelines(lines)
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class AlpineStrategy(FileStrategy):
"""
    This is an Alpine Linux Hostname manipulation strategy class - it edits
    the /etc/hostname file then runs hostname -F /etc/hostname.
"""
FILE = '/etc/hostname'
COMMAND = 'hostname'
def set_current_hostname(self, name):
super(AlpineStrategy, self).set_current_hostname(name)
hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
cmd = [hostname_cmd, '-F', self.FILE]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class SystemdStrategy(BaseStrategy):
"""
This is a Systemd hostname manipulation strategy class - it uses
the hostnamectl command.
"""
COMMAND = "hostnamectl"
def __init__(self, module):
super(SystemdStrategy, self).__init__(module)
self.hostnamectl_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostnamectl_cmd, '--transient', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostnamectl_cmd, '--transient', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
cmd = [self.hostnamectl_cmd, '--static', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostnamectl_cmd, '--pretty', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
cmd = [self.hostnamectl_cmd, '--static', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class OpenRCStrategy(BaseStrategy):
"""
This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits
the /etc/conf.d/hostname file.
"""
FILE = '/etc/conf.d/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
for line in get_file_lines(self.FILE):
line = line.strip()
if line.startswith('hostname='):
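                    # the slice at index 10 skips 'hostname=' (9 chars) plus an
                    # opening double quote; strip('"') drops the trailing quote.
                    # An unquoted value would lose its first character here.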
return line[10:].strip('"')
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = [x.strip() for x in get_file_lines(self.FILE)]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
with open(self.FILE, 'w') as f:
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class OpenBSDStrategy(FileStrategy):
"""
    This is an OpenBSD family Hostname manipulation strategy class - it edits
the /etc/myname file.
"""
FILE = '/etc/myname'
class SolarisStrategy(BaseStrategy):
"""
    This is a Solaris 11 or later Hostname manipulation strategy class - it
    executes the hostname command.
"""
COMMAND = "hostname"
def __init__(self, module):
super(SolarisStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def set_current_hostname(self, name):
cmd_option = '-t'
cmd = [self.hostname_cmd, cmd_option, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
fmri = 'svc:/system/identity:node'
pattern = 'config/nodename'
cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern)
rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class FreeBSDStrategy(BaseStrategy):
"""
This is a FreeBSD hostname manipulation strategy class - it edits
the /etc/rc.conf.d/hostname file.
"""
FILE = '/etc/rc.conf.d/hostname'
COMMAND = "hostname"
def __init__(self, module):
super(FreeBSDStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
for line in get_file_lines(self.FILE):
line = line.strip()
if line.startswith('hostname='):
return line[10:].strip('"')
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
if os.path.isfile(self.FILE):
lines = [x.strip() for x in get_file_lines(self.FILE)]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
else:
lines = ['hostname="%s"' % name]
with open(self.FILE, 'w') as f:
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class DarwinStrategy(BaseStrategy):
"""
This is a macOS hostname manipulation strategy class. It uses
/usr/sbin/scutil to set ComputerName, HostName, and LocalHostName.
HostName corresponds to what most platforms consider to be hostname.
It controls the name used on the command line and SSH.
However, macOS also has LocalHostName and ComputerName settings.
LocalHostName controls the Bonjour/ZeroConf name, used by services
like AirDrop. This class implements a method, _scrub_hostname(), that mimics
    the transformations macOS makes on hostnames when entered in the Sharing
preference pane. It replaces spaces with dashes and removes all special
characters.
ComputerName is the name used for user-facing GUI services, like the
System Preferences/Sharing pane and when users connect to the Mac over the network.
"""
def __init__(self, module):
super(DarwinStrategy, self).__init__(module)
self.scutil = self.module.get_bin_path('scutil', True)
self.name_types = ('HostName', 'ComputerName', 'LocalHostName')
self.scrubbed_name = self._scrub_hostname(self.module.params['name'])
def _make_translation(self, replace_chars, replacement_chars, delete_chars):
if PY3:
return str.maketrans(replace_chars, replacement_chars, delete_chars)
if not isinstance(replace_chars, text_type) or not isinstance(replacement_chars, text_type):
raise ValueError('replace_chars and replacement_chars must both be strings')
if len(replace_chars) != len(replacement_chars):
raise ValueError('replacement_chars must be the same length as replace_chars')
table = dict(zip((ord(c) for c in replace_chars), replacement_chars))
for char in delete_chars:
table[ord(char)] = None
return table
def _scrub_hostname(self, name):
"""
LocalHostName only accepts valid DNS characters while HostName and ComputerName
accept a much wider range of characters. This function aims to mimic how macOS
translates a friendly name to the LocalHostName.
"""
# Replace all these characters with a single dash
name = to_text(name)
replace_chars = u'\'"~`!@#$%^&*(){}[]/=?+\\|-_ '
delete_chars = u".'"
table = self._make_translation(replace_chars, u'-' * len(replace_chars), delete_chars)
name = name.translate(table)
# Replace multiple dashes with a single dash
while '-' * 2 in name:
name = name.replace('-' * 2, '-')
name = name.rstrip('-')
return name
def get_current_hostname(self):
cmd = [self.scutil, '--get', 'HostName']
rc, out, err = self.module.run_command(cmd)
if rc != 0 and 'HostName: not set' not in err:
self.module.fail_json(msg="Failed to get current hostname rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def get_permanent_hostname(self):
cmd = [self.scutil, '--get', 'ComputerName']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Failed to get permanent hostname rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
for hostname_type in self.name_types:
cmd = [self.scutil, '--set', hostname_type]
if hostname_type == 'LocalHostName':
cmd.append(to_native(self.scrubbed_name))
else:
cmd.append(to_native(name))
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Failed to set {3} to '{2}': {0} {1}".format(to_native(out), to_native(err), to_native(name), hostname_type))
def set_current_hostname(self, name):
pass
def update_current_hostname(self):
pass
def update_permanent_hostname(self):
name = self.module.params['name']
# Get all the current host name values in the order of self.name_types
all_names = tuple(self.module.run_command([self.scutil, '--get', name_type])[1].strip() for name_type in self.name_types)
# Get the expected host name values based on the order in self.name_types
expected_names = tuple(self.scrubbed_name if n == 'LocalHostName' else name for n in self.name_types)
# Ensure all three names are updated
if all_names != expected_names:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
class Hostname(object):
"""
This is a generic Hostname manipulation class that is subclassed
based on platform.
A subclass may wish to assign a different strategy instance to self.strategy.
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None # type: str | None
strategy_class = UnimplementedStrategy # type: t.Type[BaseStrategy]
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(Hostname)
return super(cls, new_cls).__new__(new_cls)
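# get_platform_subclass() picks the most specific Hostname subclass whose
# platform/distribution attributes match the running system, falling back
# to this generic class when nothing matches.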
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.use = module.params['use']
if self.use is not None:
strat = globals()['%sStrategy' % STRATS[self.use]]
self.strategy = strat(module)
elif platform.system() == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module):
# This is Linux and systemd is active
self.strategy = SystemdStrategy(module)
else:
self.strategy = self.strategy_class(module)
def update_current_and_permanent_hostname(self):
return self.strategy.update_current_and_permanent_hostname()
def get_current_hostname(self):
return self.strategy.get_current_hostname()
def set_current_hostname(self, name):
self.strategy.set_current_hostname(name)
def get_permanent_hostname(self):
return self.strategy.get_permanent_hostname()
def set_permanent_hostname(self, name):
self.strategy.set_permanent_hostname(name)
class SLESHostname(Hostname):
platform = 'Linux'
distribution = 'Sles'
try:
distribution_version = get_distribution_version()
# casting to float may raise ValueError on non-SLES; we use float for a little more safety over int
if distribution_version and 10 <= float(distribution_version) <= 12:
strategy_class = SLESStrategy # type: t.Type[BaseStrategy]
else:
raise ValueError()
except ValueError:
strategy_class = UnimplementedStrategy
class RHELHostname(Hostname):
platform = 'Linux'
distribution = 'Redhat'
strategy_class = RedHatStrategy
class CentOSHostname(Hostname):
platform = 'Linux'
distribution = 'Centos'
strategy_class = RedHatStrategy
class AnolisOSHostname(Hostname):
platform = 'Linux'
distribution = 'Anolis'
strategy_class = RedHatStrategy
class CloudlinuxserverHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinuxserver'
strategy_class = RedHatStrategy
class CloudlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinux'
strategy_class = RedHatStrategy
class AlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alinux'
strategy_class = RedHatStrategy
class ScientificHostname(Hostname):
platform = 'Linux'
distribution = 'Scientific'
strategy_class = RedHatStrategy
class OracleLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Oracle'
strategy_class = RedHatStrategy
class VirtuozzoLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Virtuozzo'
strategy_class = RedHatStrategy
class AmazonLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Amazon'
strategy_class = RedHatStrategy
class DebianHostname(Hostname):
platform = 'Linux'
distribution = 'Debian'
strategy_class = FileStrategy
class KylinHostname(Hostname):
platform = 'Linux'
distribution = 'Kylin'
strategy_class = FileStrategy
class CumulusHostname(Hostname):
platform = 'Linux'
distribution = 'Cumulus-linux'
strategy_class = FileStrategy
class KaliHostname(Hostname):
platform = 'Linux'
distribution = 'Kali'
strategy_class = FileStrategy
class ParrotHostname(Hostname):
platform = 'Linux'
distribution = 'Parrot'
strategy_class = FileStrategy
class UbuntuHostname(Hostname):
platform = 'Linux'
distribution = 'Ubuntu'
strategy_class = FileStrategy
class LinuxmintHostname(Hostname):
platform = 'Linux'
distribution = 'Linuxmint'
strategy_class = FileStrategy
class LinaroHostname(Hostname):
platform = 'Linux'
distribution = 'Linaro'
strategy_class = FileStrategy
class DevuanHostname(Hostname):
platform = 'Linux'
distribution = 'Devuan'
strategy_class = FileStrategy
class RaspbianHostname(Hostname):
platform = 'Linux'
distribution = 'Raspbian'
strategy_class = FileStrategy
class GentooHostname(Hostname):
platform = 'Linux'
distribution = 'Gentoo'
strategy_class = OpenRCStrategy
class ALTLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Altlinux'
strategy_class = RedHatStrategy
class AlpineLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alpine'
strategy_class = AlpineStrategy
class OpenBSDHostname(Hostname):
platform = 'OpenBSD'
distribution = None
strategy_class = OpenBSDStrategy
class SolarisHostname(Hostname):
platform = 'SunOS'
distribution = None
strategy_class = SolarisStrategy
class FreeBSDHostname(Hostname):
platform = 'FreeBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NetBSDHostname(Hostname):
platform = 'NetBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NeonHostname(Hostname):
platform = 'Linux'
distribution = 'Neon'
strategy_class = FileStrategy
class DarwinHostname(Hostname):
platform = 'Darwin'
distribution = None
strategy_class = DarwinStrategy
class VoidLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Void'
strategy_class = FileStrategy
class PopHostname(Hostname):
platform = 'Linux'
distribution = 'Pop'
strategy_class = FileStrategy
class EurolinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Eurolinux'
strategy_class = RedHatStrategy
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
use=dict(type='str', choices=STRATS.keys())
),
supports_check_mode=True,
)
hostname = Hostname(module)
name = module.params['name']
current_hostname = hostname.get_current_hostname()
permanent_hostname = hostname.get_permanent_hostname()
changed = hostname.update_current_and_permanent_hostname()
if name != current_hostname:
name_before = current_hostname
elif name != permanent_hostname:
name_before = permanent_hostname
else:
name_before = permanent_hostname
# NOTE: socket.getfqdn() calls gethostbyaddr(socket.gethostname()), which can be
# slow to return if the name does not resolve correctly.
kw = dict(changed=changed, name=name,
ansible_facts=dict(ansible_hostname=name.split('.')[0],
ansible_nodename=name,
ansible_fqdn=socket.getfqdn(),
ansible_domain='.'.join(socket.getfqdn().split('.')[1:])))
if changed:
kw['diff'] = {'after': 'hostname = ' + name + '\n',
'before': 'hostname = ' + name_before + '\n'}
module.exit_json(**kw)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,286 |
Failed to detect deepin/Uos Linux distro info correctly
|
### Summary
Deepin Linux is a Debian-based distro. Ansible is unable to detect deepin/Uos Linux distro info correctly.
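To illustrate the kind of parsing involved, here is a minimal sketch of reading the os-release file (illustrative only; `parse_os_release` is a hypothetical helper, not the actual logic in distribution.py):
```python
# Minimal sketch: read KEY=value pairs from os-release and strip the
# surrounding quotes, which the buggy output below leaves in place.
def parse_os_release(path='/etc/os-release'):
    facts = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            facts[key] = value.strip('"')
    return facts

# On Deepin 20.4, NAME and VERSION_ID identify the distro, so the expected
# facts are "Deepin" and "20.4" rather than the quoted values shown in the
# actual results below.
info = parse_os_release()
```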
### Issue Type
Bug Report
### Component Name
lib/ansible/module_utils/facts/system/distribution.py
### Ansible Version
```console
$ ansible --version
ansible 2.7.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/chanth/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.3 (default, Apr 2 2021, 05:20:44) [GCC 8.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Deepin 20.4 (apricot)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```bash
ansible all -m setup -i"`hostname`," --connection=local -a"filter=ansible_distribution*"
```
### Expected Results
```console
chanth-PC | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Deepin",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "20",
"ansible_distribution_release": "apricot",
"ansible_distribution_version": "20.4"
},
"changed": false
}
```
### Actual Results
```console
chanth-PC | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Deepin 20.4",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/usr/lib/os-release",
"ansible_distribution_file_variety": "ClearLinux",
"ansible_distribution_major_version": "\"20.4\"",
"ansible_distribution_release": "\"20.4\"",
"ansible_distribution_version": "\"20.4\""
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77286
|
https://github.com/ansible/ansible/pull/77275
|
4d984613f5e16e205434cdf7290572e62b40bf62
|
34e60c0a7ab67d5d7dbf133de728aeb68398dd02
| 2022-03-15T15:10:55Z |
python
| 2022-03-17T19:22:23Z |
test/units/module_utils/facts/system/distribution/fixtures/deepin_20.4.json
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,286 |
Failed to detect deepin/Uos Linux distro info correctly
|
### Summary
Deepin Linux is a Debian-based distro. Ansible is unable to detect deepin/Uos Linux distro info correctly.
### Issue Type
Bug Report
### Component Name
lib/ansible/module_utils/facts/system/distribution.py
### Ansible Version
```console
$ ansible --version
ansible 2.7.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/chanth/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.3 (default, Apr 2 2021, 05:20:44) [GCC 8.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Deepin 20.4 (apricot)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```bash
ansible all -m setup -i"`hostname`," --connection=local -a"filter=ansible_distribution*"
```
### Expected Results
```console
chanth-PC | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Deepin",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "20",
"ansible_distribution_release": "apricot",
"ansible_distribution_version": "20.4"
},
"changed": false
}
```
### Actual Results
```console
chanth-PC | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Deepin 20.4",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/usr/lib/os-release",
"ansible_distribution_file_variety": "ClearLinux",
"ansible_distribution_major_version": "\"20.4\"",
"ansible_distribution_release": "\"20.4\"",
"ansible_distribution_version": "\"20.4\""
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77286
|
https://github.com/ansible/ansible/pull/77275
|
4d984613f5e16e205434cdf7290572e62b40bf62
|
34e60c0a7ab67d5d7dbf133de728aeb68398dd02
| 2022-03-15T15:10:55Z |
python
| 2022-03-17T19:22:23Z |
test/units/module_utils/facts/system/distribution/fixtures/uos_20.json
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,958 |
NetworkManager set-hostname writes to the log every few seconds
|
### Summary
I have a playbook as follows:
```
- name: Change the hostname
hostname:
name: example.com
```
I have also tried:
```
- name: Change the hostname
hostname:
name: example.com
use: systemd
```
Changing hostname on CentOS Stream 8 using the above playbook causes the target machine to log the following output non-stop:
```
Feb 6 03:43:41 172-105-183-143 NetworkManager[772]: <info> [1644119021.3649] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
Feb 6 03:43:42 172-105-183-143 NetworkManager[772]: <info> [1644119022.3647] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
Feb 6 03:43:42 172-105-183-143 NetworkManager[772]: <info> [1644119022.3651] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
Feb 6 03:43:43 172-105-183-143 NetworkManager[772]: <info> [1644119023.3646] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
Feb 6 03:43:43 172-105-183-143 NetworkManager[772]: <info> [1644119023.3649] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
```
However, when I log into the target machine and run the following command, it stops producing unnecessary logs.
```hostnamectl set-hostname example.com```
OS details of the target machine:
```
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
```
The same issue is also logged here:
https://bugzilla.redhat.com/show_bug.cgi?id=1894137
However, I only get the issue when I update the hostname using an Ansible playbook, not when I run the 'hostnamectl' command directly on the box.
Ansible version I am using:
```
➜ ~ ansible --version
ansible [core 2.12.2]
config file = None
configured module search path = ['/Users/gehendra/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/gehendra/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.8 (main, Nov 10 2021, 09:21:22) [Clang 13.0.0 (clang-1300.0.29.3)]
jinja version = 3.0.3
libyaml = True
```
Your help would be highly appreciated.
Thank you.
### Issue Type
Bug Report
### Component Name
ansible
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.2]
config file = None
configured module search path = ['/Users/gehendra/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/gehendra/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.8 (main, Nov 10 2021, 09:21:22) [Clang 13.0.0 (clang-1300.0.29.3)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
(END)
Nothing was produced at all.
```
### OS / Environment
[ga@example ~]$ cat /etc/os-release
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Change the hostname
hostname:
name: example.com
or
- name: Change the hostname
hostname:
name: example.com
use: systemd
```
### Expected Results
I do not want the garbage/error logs to be written every second.
### Actual Results
```console
No errors are displayed when running the playbook on the control machine.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76958
|
https://github.com/ansible/ansible/pull/77243
|
1100289a45e5b5444bc5af59052fc2d63452eff6
|
c1a34d5a6337002edf74dbd63d4bc68ce8e085e8
| 2022-02-06T04:08:14Z |
python
| 2022-03-21T14:32:41Z |
changelogs/fragments/76958-hostname-systemd-nm.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,958 |
NetworkManager set-hostname writes to the log every few seconds
|
### Summary
I have a playbook as follows:
```
- name: Change the hostname
hostname:
name: example.com
```
I have also tried:
```
- name: Change the hostname
hostname:
name: example.com
use: systemd
```
Changing hostname on CentOS Stream 8 using the above playbook causes the target machine to log the following output non-stop:
```
Feb 6 03:43:41 172-105-183-143 NetworkManager[772]: <info> [1644119021.3649] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
Feb 6 03:43:42 172-105-183-143 NetworkManager[772]: <info> [1644119022.3647] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
Feb 6 03:43:42 172-105-183-143 NetworkManager[772]: <info> [1644119022.3651] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
Feb 6 03:43:43 172-105-183-143 NetworkManager[772]: <info> [1644119023.3646] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
Feb 6 03:43:43 172-105-183-143 NetworkManager[772]: <info> [1644119023.3649] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
```
However, when I log into the target machine and run the following command, it stops producing unnecessary logs.
```hostnamectl set-hostname example.com```
OS details of the target machine:
```
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
```
The same issue is also logged here:
https://bugzilla.redhat.com/show_bug.cgi?id=1894137
However, I only get the issue when I update the hostname using an Ansible playbook, not when I run the 'hostnamectl' command directly on the box.
Ansible version I am using:
```
➜ ~ ansible --version
ansible [core 2.12.2]
config file = None
configured module search path = ['/Users/gehendra/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/gehendra/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.8 (main, Nov 10 2021, 09:21:22) [Clang 13.0.0 (clang-1300.0.29.3)]
jinja version = 3.0.3
libyaml = True
```
Your help would be highly appreciated.
Thank you.
### Issue Type
Bug Report
### Component Name
ansible
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.2]
config file = None
configured module search path = ['/Users/gehendra/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/gehendra/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.8 (main, Nov 10 2021, 09:21:22) [Clang 13.0.0 (clang-1300.0.29.3)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
(END)
Nothing was produced at all.
```
### OS / Environment
[ga@example ~]$ cat /etc/os-release
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Change the hostname
hostname:
name: example.com
or
- name: Change the hostname
hostname:
name: example.com
use: systemd
```
### Expected Results
I do not want the garbage/error logs to be written every second.
### Actual Results
```console
No errors are displayed when running the playbook on the control machine.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76958
|
https://github.com/ansible/ansible/pull/77243
|
1100289a45e5b5444bc5af59052fc2d63452eff6
|
c1a34d5a6337002edf74dbd63d4bc68ce8e085e8
| 2022-02-06T04:08:14Z |
python
| 2022-03-21T14:32:41Z |
lib/ansible/modules/hostname.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Hiroaki Nakamura <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: hostname
author:
- Adrian Likins (@alikins)
- Hideki Saito (@saito-hideki)
version_added: "1.4"
short_description: Manage hostname
requirements: [ hostname ]
description:
- Set system's hostname. Supports most OSs/Distributions including those using C(systemd).
- Windows, HP-UX, and AIX are not currently supported.
notes:
- This module does B(NOT) modify C(/etc/hosts). You need to modify it yourself using other modules such as M(ansible.builtin.template)
or M(ansible.builtin.replace).
- On macOS, this module uses C(scutil) to set C(HostName), C(ComputerName), and C(LocalHostName). Since C(LocalHostName)
cannot contain spaces or most special characters, this module will replace characters when setting C(LocalHostName).
options:
name:
description:
- Name of the host.
- If the value is a fully qualified domain name that does not resolve from the given host,
this will cause the module to hang for a few seconds while waiting for the name resolution attempt to timeout.
type: str
required: true
use:
description:
- Which strategy to use to update the hostname.
- If not set we try to autodetect, but this can be problematic, particularly with containers as they can present misleading information.
- Note that 'systemd' should be specified for RHEL/EL/CentOS 7+. Older distributions should use 'redhat'.
choices: ['alpine', 'debian', 'freebsd', 'generic', 'macos', 'macosx', 'darwin', 'openbsd', 'openrc', 'redhat', 'sles', 'solaris', 'systemd']
type: str
version_added: '2.9'
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.facts
attributes:
check_mode:
support: full
diff_mode:
support: full
facts:
support: full
platform:
platforms: posix
'''
EXAMPLES = '''
- name: Set a hostname
ansible.builtin.hostname:
name: web01
- name: Set a hostname specifying strategy
ansible.builtin.hostname:
name: web01
use: systemd
'''
import os
import platform
import socket
import traceback
import ansible.module_utils.compat.typing as t
from ansible.module_utils.basic import (
AnsibleModule,
get_distribution,
get_distribution_version,
)
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils.facts.utils import get_file_lines, get_file_content
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six import PY3, text_type
STRATS = {
'alpine': 'Alpine',
'debian': 'Systemd',
'freebsd': 'FreeBSD',
'generic': 'Base',
'macos': 'Darwin',
'macosx': 'Darwin',
'darwin': 'Darwin',
'openbsd': 'OpenBSD',
'openrc': 'OpenRC',
'redhat': 'RedHat',
'sles': 'SLES',
'solaris': 'Solaris',
'systemd': 'Systemd',
}
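# Each key is a value accepted by the module's `use` option; each value is
# the prefix of a *Strategy class defined below, resolved at runtime via
# globals()['%sStrategy' % STRATS[use]] in Hostname.__init__().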
class BaseStrategy(object):
def __init__(self, module):
self.module = module
self.changed = False
def update_current_and_permanent_hostname(self):
self.update_current_hostname()
self.update_permanent_hostname()
return self.changed
def update_current_hostname(self):
name = self.module.params['name']
current_name = self.get_current_hostname()
if current_name != name:
if not self.module.check_mode:
self.set_current_hostname(name)
self.changed = True
def update_permanent_hostname(self):
name = self.module.params['name']
permanent_name = self.get_permanent_hostname()
if permanent_name != name:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
def get_current_hostname(self):
return self.get_permanent_hostname()
def set_current_hostname(self, name):
pass
def get_permanent_hostname(self):
raise NotImplementedError
def set_permanent_hostname(self, name):
raise NotImplementedError
class UnimplementedStrategy(BaseStrategy):
def update_current_and_permanent_hostname(self):
self.unimplemented_error()
def update_current_hostname(self):
self.unimplemented_error()
def update_permanent_hostname(self):
self.unimplemented_error()
def get_current_hostname(self):
self.unimplemented_error()
def set_current_hostname(self, name):
self.unimplemented_error()
def get_permanent_hostname(self):
self.unimplemented_error()
def set_permanent_hostname(self, name):
self.unimplemented_error()
def unimplemented_error(self):
system = platform.system()
distribution = get_distribution()
if distribution is not None:
msg_platform = '%s (%s)' % (system, distribution)
else:
msg_platform = system
self.module.fail_json(
msg='hostname module cannot be used on platform %s' % msg_platform)
class CommandStrategy(BaseStrategy):
COMMAND = 'hostname'
def __init__(self, module):
super(CommandStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
return 'UNKNOWN'
def set_permanent_hostname(self, name):
pass
class FileStrategy(BaseStrategy):
FILE = '/etc/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
return get_file_content(self.FILE, default='', strip=True)
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
with open(self.FILE, 'w+') as f:
f.write("%s\n" % name)
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class SLESStrategy(FileStrategy):
"""
This is a SLES Hostname strategy class - it edits the
/etc/HOSTNAME file.
"""
FILE = '/etc/HOSTNAME'
class RedHatStrategy(BaseStrategy):
"""
This is a Redhat Hostname strategy class - it edits the
/etc/sysconfig/network file.
"""
NETWORK_FILE = '/etc/sysconfig/network'
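# The network file holds shell-style KEY=value assignments; the entry
# managed here looks like (illustrative value):
#   HOSTNAME=web01.example.com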
def get_permanent_hostname(self):
try:
for line in get_file_lines(self.NETWORK_FILE):
line = to_native(line).strip()
if line.startswith('HOSTNAME'):
k, v = line.split('=')
return v.strip()
self.module.fail_json(
"Unable to locate HOSTNAME entry in %s" % self.NETWORK_FILE
)
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = []
found = False
content = get_file_content(self.NETWORK_FILE, strip=False) or ""
for line in content.splitlines(True):
line = to_native(line)
if line.strip().startswith('HOSTNAME'):
lines.append("HOSTNAME=%s\n" % name)
found = True
else:
lines.append(line)
if not found:
lines.append("HOSTNAME=%s\n" % name)
with open(self.NETWORK_FILE, 'w+') as f:
f.writelines(lines)
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class AlpineStrategy(FileStrategy):
"""
This is an Alpine Linux Hostname manipulation strategy class - it edits
the /etc/hostname file and then runs hostname -F /etc/hostname.
"""
FILE = '/etc/hostname'
COMMAND = 'hostname'
def set_current_hostname(self, name):
super(AlpineStrategy, self).set_current_hostname(name)
hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
cmd = [hostname_cmd, '-F', self.FILE]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class SystemdStrategy(BaseStrategy):
"""
This is a Systemd hostname manipulation strategy class - it uses
the hostnamectl command.
"""
COMMAND = "hostnamectl"
def __init__(self, module):
super(SystemdStrategy, self).__init__(module)
self.hostnamectl_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostnamectl_cmd, '--transient', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostnamectl_cmd, '--transient', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
cmd = [self.hostnamectl_cmd, '--static', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostnamectl_cmd, '--pretty', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
cmd = [self.hostnamectl_cmd, '--static', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
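# NOTE: the pretty, static and transient hostnames are set with separate
# hostnamectl invocations here; the issue above reports that this can leave
# NetworkManager repeatedly logging that the hostname was changed outside
# NetworkManager, whereas a single `hostnamectl set-hostname <name>` run
# updates all forms at once.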
class OpenRCStrategy(BaseStrategy):
"""
This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits
the /etc/conf.d/hostname file.
"""
FILE = '/etc/conf.d/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
for line in get_file_lines(self.FILE):
line = line.strip()
if line.startswith('hostname='):
return line[10:].strip('"')
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = [x.strip() for x in get_file_lines(self.FILE)]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
with open(self.FILE, 'w') as f:
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class OpenBSDStrategy(FileStrategy):
"""
This is an OpenBSD family Hostname manipulation strategy class - it edits
the /etc/myname file.
"""
FILE = '/etc/myname'
class SolarisStrategy(BaseStrategy):
"""
This is a Solaris 11 or later Hostname manipulation strategy class - it
executes the hostname command.
"""
COMMAND = "hostname"
def __init__(self, module):
super(SolarisStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def set_current_hostname(self, name):
cmd_option = '-t'
cmd = [self.hostname_cmd, cmd_option, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
fmri = 'svc:/system/identity:node'
pattern = 'config/nodename'
cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern)
rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class FreeBSDStrategy(BaseStrategy):
"""
This is a FreeBSD hostname manipulation strategy class - it edits
the /etc/rc.conf.d/hostname file.
"""
FILE = '/etc/rc.conf.d/hostname'
COMMAND = "hostname"
def __init__(self, module):
super(FreeBSDStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
for line in get_file_lines(self.FILE):
line = line.strip()
if line.startswith('hostname='):
return line[10:].strip('"')
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
if os.path.isfile(self.FILE):
lines = [x.strip() for x in get_file_lines(self.FILE)]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
else:
lines = ['hostname="%s"' % name]
with open(self.FILE, 'w') as f:
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class DarwinStrategy(BaseStrategy):
"""
This is a macOS hostname manipulation strategy class. It uses
/usr/sbin/scutil to set ComputerName, HostName, and LocalHostName.
HostName corresponds to what most platforms consider to be hostname.
It controls the name used on the command line and SSH.
However, macOS also has LocalHostName and ComputerName settings.
LocalHostName controls the Bonjour/ZeroConf name, used by services
like AirDrop. This class implements a method, _scrub_hostname(), that mimics
the transformations macOS makes on hostnames when entered in the Sharing
preference pane. It replaces spaces with dashes and removes all special
characters.
ComputerName is the name used for user-facing GUI services, like the
System Preferences/Sharing pane and when users connect to the Mac over the network.
"""
def __init__(self, module):
super(DarwinStrategy, self).__init__(module)
self.scutil = self.module.get_bin_path('scutil', True)
self.name_types = ('HostName', 'ComputerName', 'LocalHostName')
self.scrubbed_name = self._scrub_hostname(self.module.params['name'])
def _make_translation(self, replace_chars, replacement_chars, delete_chars):
if PY3:
return str.maketrans(replace_chars, replacement_chars, delete_chars)
if not isinstance(replace_chars, text_type) or not isinstance(replacement_chars, text_type):
raise ValueError('replace_chars and replacement_chars must both be strings')
if len(replace_chars) != len(replacement_chars):
raise ValueError('replacement_chars must be the same length as replace_chars')
table = dict(zip((ord(c) for c in replace_chars), replacement_chars))
for char in delete_chars:
table[ord(char)] = None
return table
def _scrub_hostname(self, name):
"""
LocalHostName only accepts valid DNS characters while HostName and ComputerName
accept a much wider range of characters. This function aims to mimic how macOS
translates a friendly name to the LocalHostName.
"""
# Replace all these characters with a single dash
name = to_text(name)
replace_chars = u'\'"~`!@#$%^&*(){}[]/=?+\\|-_ '
delete_chars = u".'"
table = self._make_translation(replace_chars, u'-' * len(replace_chars), delete_chars)
name = name.translate(table)
# Replace multiple dashes with a single dash
while '-' * 2 in name:
name = name.replace('-' * 2, '-')
name = name.rstrip('-')
return name
def get_current_hostname(self):
cmd = [self.scutil, '--get', 'HostName']
rc, out, err = self.module.run_command(cmd)
if rc != 0 and 'HostName: not set' not in err:
self.module.fail_json(msg="Failed to get current hostname rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def get_permanent_hostname(self):
cmd = [self.scutil, '--get', 'ComputerName']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Failed to get permanent hostname rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
for hostname_type in self.name_types:
cmd = [self.scutil, '--set', hostname_type]
if hostname_type == 'LocalHostName':
cmd.append(to_native(self.scrubbed_name))
else:
cmd.append(to_native(name))
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Failed to set {3} to '{2}': {0} {1}".format(to_native(out), to_native(err), to_native(name), hostname_type))
def set_current_hostname(self, name):
pass
def update_current_hostname(self):
pass
def update_permanent_hostname(self):
name = self.module.params['name']
# Get all the current host name values in the order of self.name_types
all_names = tuple(self.module.run_command([self.scutil, '--get', name_type])[1].strip() for name_type in self.name_types)
# Get the expected host name values based on the order in self.name_types
expected_names = tuple(self.scrubbed_name if n == 'LocalHostName' else name for n in self.name_types)
# Ensure all three names are updated
if all_names != expected_names:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
class Hostname(object):
"""
This is a generic Hostname manipulation class that is subclassed
based on platform.
A subclass may wish to assign a different strategy instance to self.strategy.
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None # type: str | None
strategy_class = UnimplementedStrategy # type: t.Type[BaseStrategy]
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(Hostname)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.use = module.params['use']
if self.use is not None:
strat = globals()['%sStrategy' % STRATS[self.use]]
self.strategy = strat(module)
elif platform.system() == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module):
# This is Linux and systemd is active
self.strategy = SystemdStrategy(module)
else:
self.strategy = self.strategy_class(module)
def update_current_and_permanent_hostname(self):
return self.strategy.update_current_and_permanent_hostname()
def get_current_hostname(self):
return self.strategy.get_current_hostname()
def set_current_hostname(self, name):
self.strategy.set_current_hostname(name)
def get_permanent_hostname(self):
return self.strategy.get_permanent_hostname()
def set_permanent_hostname(self, name):
self.strategy.set_permanent_hostname(name)
class SLESHostname(Hostname):
platform = 'Linux'
distribution = 'Sles'
try:
distribution_version = get_distribution_version()
# casting to float may raise ValueError on non-SLES; we use float for a little more safety over int
if distribution_version and 10 <= float(distribution_version) <= 12:
strategy_class = SLESStrategy # type: t.Type[BaseStrategy]
else:
raise ValueError()
except ValueError:
strategy_class = UnimplementedStrategy
class RHELHostname(Hostname):
platform = 'Linux'
distribution = 'Redhat'
strategy_class = RedHatStrategy
class CentOSHostname(Hostname):
platform = 'Linux'
distribution = 'Centos'
strategy_class = RedHatStrategy
class AnolisOSHostname(Hostname):
platform = 'Linux'
distribution = 'Anolis'
strategy_class = RedHatStrategy
class CloudlinuxserverHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinuxserver'
strategy_class = RedHatStrategy
class CloudlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinux'
strategy_class = RedHatStrategy
class AlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alinux'
strategy_class = RedHatStrategy
class ScientificHostname(Hostname):
platform = 'Linux'
distribution = 'Scientific'
strategy_class = RedHatStrategy
class OracleLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Oracle'
strategy_class = RedHatStrategy
class VirtuozzoLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Virtuozzo'
strategy_class = RedHatStrategy
class AmazonLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Amazon'
strategy_class = RedHatStrategy
class DebianHostname(Hostname):
platform = 'Linux'
distribution = 'Debian'
strategy_class = FileStrategy
class KylinHostname(Hostname):
platform = 'Linux'
distribution = 'Kylin'
strategy_class = FileStrategy
class CumulusHostname(Hostname):
platform = 'Linux'
distribution = 'Cumulus-linux'
strategy_class = FileStrategy
class KaliHostname(Hostname):
platform = 'Linux'
distribution = 'Kali'
strategy_class = FileStrategy
class ParrotHostname(Hostname):
platform = 'Linux'
distribution = 'Parrot'
strategy_class = FileStrategy
class UbuntuHostname(Hostname):
platform = 'Linux'
distribution = 'Ubuntu'
strategy_class = FileStrategy
class LinuxmintHostname(Hostname):
platform = 'Linux'
distribution = 'Linuxmint'
strategy_class = FileStrategy
class LinaroHostname(Hostname):
platform = 'Linux'
distribution = 'Linaro'
strategy_class = FileStrategy
class DevuanHostname(Hostname):
platform = 'Linux'
distribution = 'Devuan'
strategy_class = FileStrategy
class RaspbianHostname(Hostname):
platform = 'Linux'
distribution = 'Raspbian'
strategy_class = FileStrategy
class UosHostname(Hostname):
platform = 'Linux'
distribution = 'Uos'
strategy_class = FileStrategy
class DeepinHostname(Hostname):
platform = 'Linux'
distribution = 'Deepin'
strategy_class = FileStrategy
class GentooHostname(Hostname):
platform = 'Linux'
distribution = 'Gentoo'
strategy_class = OpenRCStrategy
class ALTLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Altlinux'
strategy_class = RedHatStrategy
class AlpineLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alpine'
strategy_class = AlpineStrategy
class OpenBSDHostname(Hostname):
platform = 'OpenBSD'
distribution = None
strategy_class = OpenBSDStrategy
class SolarisHostname(Hostname):
platform = 'SunOS'
distribution = None
strategy_class = SolarisStrategy
class FreeBSDHostname(Hostname):
platform = 'FreeBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NetBSDHostname(Hostname):
platform = 'NetBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NeonHostname(Hostname):
platform = 'Linux'
distribution = 'Neon'
strategy_class = FileStrategy
class DarwinHostname(Hostname):
platform = 'Darwin'
distribution = None
strategy_class = DarwinStrategy
class VoidLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Void'
strategy_class = FileStrategy
class PopHostname(Hostname):
platform = 'Linux'
distribution = 'Pop'
strategy_class = FileStrategy
class EurolinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Eurolinux'
strategy_class = RedHatStrategy
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
use=dict(type='str', choices=STRATS.keys())
),
supports_check_mode=True,
)
hostname = Hostname(module)
name = module.params['name']
current_hostname = hostname.get_current_hostname()
permanent_hostname = hostname.get_permanent_hostname()
changed = hostname.update_current_and_permanent_hostname()
if name != current_hostname:
name_before = current_hostname
elif name != permanent_hostname:
name_before = permanent_hostname
else:
name_before = permanent_hostname
# NOTE: socket.getfqdn() calls gethostbyaddr(socket.gethostname()), which can be
# slow to return if the name does not resolve correctly.
kw = dict(changed=changed, name=name,
ansible_facts=dict(ansible_hostname=name.split('.')[0],
ansible_nodename=name,
ansible_fqdn=socket.getfqdn(),
ansible_domain='.'.join(socket.getfqdn().split('.')[1:])))
if changed:
kw['diff'] = {'after': 'hostname = ' + name + '\n',
'before': 'hostname = ' + name_before + '\n'}
module.exit_json(**kw)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,958 |
NetworkManager set-hostname writes to the log every few seconds
|
### Summary
I have a playbook as follows:
```
- name: Change the hostname
hostname:
name: example.com
```
I have also tried:
```
- name: Change the hostname
hostname:
name: example.com
use: systemd
```
Changing hostname on CentOS Stream 8 using the above playbook causes the target machine to log the following output non-stop:
```
Feb 6 03:43:41 172-105-183-143 NetworkManager[772]: <info> [1644119021.3649] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
Feb 6 03:43:42 172-105-183-143 NetworkManager[772]: <info> [1644119022.3647] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
Feb 6 03:43:42 172-105-183-143 NetworkManager[772]: <info> [1644119022.3651] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
Feb 6 03:43:43 172-105-183-143 NetworkManager[772]: <info> [1644119023.3646] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
Feb 6 03:43:43 172-105-183-143 NetworkManager[772]: <info> [1644119023.3649] policy: set-hostname: current hostname was changed outside NetworkManager: 'example.com'
```
However, when I log into the target machine and run the following command, it stops producing unnecessary logs.
```hostnamectl set-hostname example.com```
OS details of the target machine:
```
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
```
The same issue is also logged here:
https://bugzilla.redhat.com/show_bug.cgi?id=1894137
However, I only get the issue when I update the hostname using an Ansible playbook, not when I run the 'hostnamectl' command directly on the box.
Ansible version I am using:
```
➜ ~ ansible --version
ansible [core 2.12.2]
config file = None
configured module search path = ['/Users/gehendra/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/gehendra/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.8 (main, Nov 10 2021, 09:21:22) [Clang 13.0.0 (clang-1300.0.29.3)]
jinja version = 3.0.3
libyaml = True
```
Your help would be highly appreciated.
Thank you.
### Issue Type
Bug Report
### Component Name
ansible
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.2]
config file = None
configured module search path = ['/Users/gehendra/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/gehendra/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.8 (main, Nov 10 2021, 09:21:22) [Clang 13.0.0 (clang-1300.0.29.3)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
(END)
Nothing was produced at all.
```
### OS / Environment
[ga@example ~]$ cat /etc/os-release
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Change the hostname
hostname:
name: example.com
or
- name: Change the hostname
hostname:
name: example.com
use: systemd
```
### Expected Results
I do not want the garbage/error logs to be written every second.
### Actual Results
```console
No errors are displayed when running the playbook on the control machine.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76958
|
https://github.com/ansible/ansible/pull/77243
|
1100289a45e5b5444bc5af59052fc2d63452eff6
|
c1a34d5a6337002edf74dbd63d4bc68ce8e085e8
| 2022-02-06T04:08:14Z |
python
| 2022-03-21T14:32:41Z |
test/integration/targets/hostname/tasks/test_normal.yml
|
- name: Run hostname module for real now
become: 'yes'
hostname:
name: crocodile.ansible.test.doesthiswork.net.example.com
register: hn2
- name: Get hostname
command: hostname
register: current_after_hn2
- name: Run hostname again to ensure it does not change
become: 'yes'
hostname:
name: crocodile.ansible.test.doesthiswork.net.example.com
register: hn3
- name: Get hostname
command: hostname
register: current_after_hn3
- assert:
that:
- hn2 is changed
- hn3 is not changed
- current_after_hn2.stdout == 'crocodile.ansible.test.doesthiswork.net.example.com'
- current_after_hn2.stdout == current_after_hn3.stdout
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,658 |
Update import test for Python 3.10
|
##### SUMMARY
Implement `find_spec` and `exec_module` in the `RestrictedModuleLoader` for Python 3.10 and later. This depends on updating the collection loader first.
See also: https://github.com/ansible/ansible/issues/72952
Depends on: https://github.com/ansible/ansible/issues/74660
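A minimal sketch of the PEP 451 hooks involved is below (class and prefix names are illustrative assumptions, not the actual `RestrictedModuleLoader` implementation):
```python
import importlib.abc
import importlib.util


class RestrictedFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Illustrative finder/loader that blocks imports outside a prefix."""

    def __init__(self, allowed_prefix):
        self.allowed_prefix = allowed_prefix

    def find_spec(self, fullname, path=None, target=None):
        if fullname.startswith(self.allowed_prefix):
            return None  # defer to the next finder on sys.meta_path
        # Claim the module so exec_module() can reject it below.
        return importlib.util.spec_from_loader(fullname, self)

    def create_module(self, spec):
        return None  # use the default module creation semantics

    def exec_module(self, module):
        raise ImportError('import of %s is restricted' % module.__name__)
```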
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
test/lib/ansible_test/_data/sanity/import/importer.py
|
https://github.com/ansible/ansible/issues/74658
|
https://github.com/ansible/ansible/pull/76427
|
f96a661adaf627764fc3527cab1ca4d829d69ea1
|
2769f5621b1d91a6eceb14d671ba9f0d07db6825
| 2021-05-11T16:13:20Z |
python
| 2022-03-23T20:55:37Z |
test/lib/ansible_test/_internal/commands/sanity/mypy.py
|
"""Sanity test which executes mypy."""
from __future__ import annotations
import dataclasses
import os
import re
import typing as t
from . import (
SanityMultipleVersion,
SanityMessage,
SanityFailure,
SanitySuccess,
SanitySkipped,
SanityTargets,
create_sanity_virtualenv,
)
from ...constants import (
CONTROLLER_PYTHON_VERSIONS,
REMOTE_ONLY_PYTHON_VERSIONS,
)
from ...test import (
TestResult,
)
from ...target import (
TestTarget,
)
from ...util import (
SubprocessError,
display,
parse_to_list_of_dict,
ANSIBLE_TEST_CONTROLLER_ROOT,
ApplicationError,
is_subdir,
)
from ...util_common import (
intercept_python,
)
from ...ansible_util import (
ansible_environment,
)
from ...config import (
SanityConfig,
)
from ...host_configs import (
PythonConfig,
VirtualPythonConfig,
)
class MypyTest(SanityMultipleVersion):
"""Sanity test which executes mypy."""
ansible_only = True
def filter_targets(self, targets): # type: (t.List[TestTarget]) -> t.List[TestTarget]
"""Return the given list of test targets, filtered to include only those relevant for the test."""
return [target for target in targets if os.path.splitext(target.path)[1] == '.py' and (
target.path.startswith('lib/ansible/') or target.path.startswith('test/lib/ansible_test/_internal/'))]
@property
def error_code(self): # type: () -> t.Optional[str]
"""Error code for ansible-test matching the format used by the underlying test program, or None if the program does not use error codes."""
return 'ansible-test'
@property
def needs_pypi(self): # type: () -> bool
"""True if the test requires PyPI, otherwise False."""
return True
def test(self, args, targets, python): # type: (SanityConfig, SanityTargets, PythonConfig) -> TestResult
settings = self.load_processor(args, python.version)
paths = [target.path for target in targets.include]
virtualenv_python = create_sanity_virtualenv(args, args.controller_python, self.name)
if args.prime_venvs:
return SanitySkipped(self.name, python_version=python.version)
if not virtualenv_python:
display.warning(f'Skipping sanity test "{self.name}" due to missing virtual environment support on Python {args.controller_python.version}.')
return SanitySkipped(self.name, python.version)
contexts = (
MyPyContext('ansible-test', ['test/lib/ansible_test/_internal/'], CONTROLLER_PYTHON_VERSIONS),
MyPyContext('ansible-core', ['lib/ansible/'], CONTROLLER_PYTHON_VERSIONS),
MyPyContext('modules', ['lib/ansible/modules/', 'lib/ansible/module_utils/'], REMOTE_ONLY_PYTHON_VERSIONS),
)
unfiltered_messages = [] # type: t.List[SanityMessage]
for context in contexts:
if python.version not in context.python_versions:
continue
unfiltered_messages.extend(self.test_context(args, virtualenv_python, python, context, paths))
notices = []
messages = []
for message in unfiltered_messages:
if message.level != 'error':
notices.append(message)
continue
match = re.search(r'^(?P<message>.*) {2}\[(?P<code>.*)]$', message.message)
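# mypy renders each error as "<message>  [<code>]", for example
# "Incompatible return value type  [return-value]"; the pattern above
# splits the code from the message.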
messages.append(SanityMessage(
message=match.group('message'),
path=message.path,
line=message.line,
column=message.column,
level=message.level,
code=match.group('code'),
))
for notice in notices:
display.info(notice.format(), verbosity=3)
# The following error codes from mypy indicate that results are incomplete.
# That prevents the test from completing successfully, just as if mypy were to traceback or generate unexpected output.
fatal_error_codes = {
'import',
'syntax',
}
fatal_errors = [message for message in messages if message.code in fatal_error_codes]
if fatal_errors:
error_message = '\n'.join(error.format() for error in fatal_errors)
raise ApplicationError(f'Encountered {len(fatal_errors)} fatal errors reported by mypy:\n{error_message}')
paths_set = set(paths)
# Only report messages for paths that were specified as targets.
# Imports in our code are followed by mypy in order to perform its analysis, which is important for accurate results.
# However, it will also report issues on those files, which is not the desired behavior.
messages = [message for message in messages if message.path in paths_set]
results = settings.process_errors(messages, paths)
if results:
return SanityFailure(self.name, messages=results, python_version=python.version)
return SanitySuccess(self.name, python_version=python.version)
@staticmethod
def test_context(
args, # type: SanityConfig
virtualenv_python, # type: VirtualPythonConfig
python, # type: PythonConfig
context, # type: MyPyContext
paths, # type: t.List[str]
): # type: (...) -> t.List[SanityMessage]
"""Run mypy tests for the specified context."""
context_paths = [path for path in paths if any(is_subdir(path, match_path) for match_path in context.paths)]
if not context_paths:
return []
config_path = os.path.join(ANSIBLE_TEST_CONTROLLER_ROOT, 'sanity', 'mypy', f'{context.name}.ini')
display.info(f'Checking context "{context.name}"', verbosity=1)
env = ansible_environment(args, color=False)
# The --no-site-packages option should not be used, as it will prevent loading of type stubs from the sanity test virtual environment.
# Enabling the --warn-unused-configs option would help keep the config files clean.
# However, the option can only be used when all files in tested contexts are evaluated.
# Unfortunately sanity tests have no way of making that determination currently.
# The option is also incompatible with incremental mode and caching.
cmd = [
# Below are arguments common to all contexts.
# They are kept here to avoid repetition in each config file.
virtualenv_python.path,
'-m', 'mypy',
'--show-column-numbers',
'--show-error-codes',
'--no-error-summary',
# This is a fairly common pattern in our code, so we'll allow it.
'--allow-redefinition',
# Since we specify the path(s) to test, it's important that mypy is configured to use the default behavior of following imports.
'--follow-imports', 'normal',
# Incremental results and caching do not provide significant performance benefits.
# It also prevents the use of the --warn-unused-configs option.
'--no-incremental',
'--cache-dir', '/dev/null',
# The platform is specified here so that results are consistent regardless of what platform the tests are run from.
# In the future, if testing of other platforms is desired, the platform should become part of the test specification, just like the Python version.
'--platform', 'linux',
# Despite what the documentation [1] states, the --python-version option does not cause mypy to search for a corresponding Python executable.
# It will instead use the Python executable that is used to run mypy itself.
# The --python-executable option can be used to specify the Python executable, with the default being the executable used to run mypy.
# As a precaution, that option is used in case the behavior of mypy is updated in the future to match the documentation.
# That should help guarantee that the Python executable providing type hints is the one used to run mypy.
# [1] https://mypy.readthedocs.io/en/stable/command_line.html#cmdoption-mypy-python-version
'--python-executable', virtualenv_python.path,
'--python-version', python.version,
# Below are context specific arguments.
# They are primarily useful for listing individual 'ignore_missing_imports' entries instead of using a global ignore.
'--config-file', config_path,
]
cmd.extend(context_paths)
try:
stdout, stderr = intercept_python(args, virtualenv_python, cmd, env, capture=True)
if stdout or stderr:
raise SubprocessError(cmd, stdout=stdout, stderr=stderr)
except SubprocessError as ex:
if ex.status != 1 or ex.stderr or not ex.stdout:
raise
stdout = ex.stdout
pattern = r'^(?P<path>[^:]*):(?P<line>[0-9]+):((?P<column>[0-9]+):)? (?P<level>[^:]+): (?P<message>.*)$'
parsed = parse_to_list_of_dict(pattern, stdout)
messages = [SanityMessage(
level=r['level'],
message=r['message'],
path=r['path'],
line=int(r['line']),
column=int(r.get('column') or '0'),
) for r in parsed]
return messages
@dataclasses.dataclass(frozen=True)
class MyPyContext:
"""Context details for a single run of mypy."""
name: str
paths: t.List[str]
python_versions: t.Tuple[str, ...]
|
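The test_context() method above invokes mypy as a subprocess and parses its stdout in two stages: one regex splits each line into location fields, and a second (applied in test()) peels off the trailing "  [error-code]" suffix emitted by --show-error-codes. A minimal standalone sketch of that parsing, reusing the same regexes; the sample line is invented for illustration:

```python
import re

# location fields: path:line[:column]: level: message
LINE_RE = re.compile(r'^(?P<path>[^:]*):(?P<line>[0-9]+):((?P<column>[0-9]+):)? (?P<level>[^:]+): (?P<message>.*)$')
# --show-error-codes appends two spaces and a bracketed code to the message
CODE_RE = re.compile(r'^(?P<message>.*) {2}\[(?P<code>.*)]$')

sample = 'lib/ansible/example.py:10:5: error: Incompatible return value type  [return-value]'

parsed = LINE_RE.search(sample)
fields = dict(
    path=parsed.group('path'),
    line=int(parsed.group('line')),
    column=int(parsed.group('column') or '0'),  # the column is optional in mypy output
    level=parsed.group('level'),
)
suffix = CODE_RE.search(parsed.group('message'))
fields.update(message=suffix.group('message'), code=suffix.group('code'))
print(fields)
# {'path': 'lib/ansible/example.py', 'line': 10, 'column': 5, 'level': 'error',
#  'message': 'Incompatible return value type', 'code': 'return-value'}
```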
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,658 |
Update import test for Python 3.10
|
##### SUMMARY
Implement `find_spec` and `exec_module` in the `RestrictedModuleLoader` for Python 3.10 and later. This depends on updating the collection loader first.
See also: https://github.com/ansible/ansible/issues/72952
Depends on: https://github.com/ansible/ansible/issues/74660
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
test/lib/ansible_test/_data/sanity/import/importer.py
|
https://github.com/ansible/ansible/issues/74658
|
https://github.com/ansible/ansible/pull/76427
|
f96a661adaf627764fc3527cab1ca4d829d69ea1
|
2769f5621b1d91a6eceb14d671ba9f0d07db6825
| 2021-05-11T16:13:20Z |
python
| 2022-03-23T20:55:37Z |
test/lib/ansible_test/_util/controller/sanity/mypy/ansible-test.ini
|
# IMPORTANT
# Set "ignore_missing_imports" per package below, rather than globally.
# That will help identify missing type stubs that should be added to the sanity test environment.
[mypy]
# There are ~350 errors reported in ansible-test when strict optional checking is enabled.
# Until the number of occurrences is greatly reduced, it's better to disable strict checking.
strict_optional = False
# There are ~25 errors reported in ansible-test under the 'misc' code.
# The majority of those errors are "Only concrete class can be given", which is due to a limitation of mypy.
# See: https://github.com/python/mypy/issues/5374
disable_error_code = misc
[mypy-argcomplete]
ignore_missing_imports = True
[mypy-coverage]
ignore_missing_imports = True
[mypy-ansible_release]
ignore_missing_imports = True
|
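The `disable_error_code = misc` entry above exists mainly because of mypy's "Only concrete class can be given" limitation (python/mypy#5374). A minimal sketch of code that trips it, assuming only the standard library; the class names are hypothetical:

```python
import abc
import typing as t

class Connection(abc.ABC):
    @abc.abstractmethod
    def connect(self) -> None: ...

class SshConnection(Connection):
    def connect(self) -> None:
        print('connecting')

def instantiate(con_type: t.Type[Connection]) -> Connection:
    return con_type()

# Valid at runtime (only the class object is stored), but mypy reports:
#   Only concrete class can be given where "Type[Connection]" is expected  [misc]
connection_types: t.List[t.Type[Connection]] = [SshConnection, Connection]
instantiate(connection_types[0]).connect()
```

Suppressing the whole `misc` code is a blunt instrument, but per-occurrence suppression was not practical across the ~25 errors the config comments mention.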
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,658 |
Update import test for Python 3.10
|
##### SUMMARY
Implement `find_spec` and `exec_module` in the `RestrictedModuleLoader` for Python 3.10 and later. This depends on updating the collection loader first.
See also: https://github.com/ansible/ansible/issues/72952
Depends on: https://github.com/ansible/ansible/issues/74660
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
test/lib/ansible_test/_data/sanity/import/importer.py
|
https://github.com/ansible/ansible/issues/74658
|
https://github.com/ansible/ansible/pull/76427
|
f96a661adaf627764fc3527cab1ca4d829d69ea1
|
2769f5621b1d91a6eceb14d671ba9f0d07db6825
| 2021-05-11T16:13:20Z |
python
| 2022-03-23T20:55:37Z |
test/lib/ansible_test/_util/target/sanity/import/importer.py
|
"""Import the given python module(s) and report error(s) encountered."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
def main():
"""
Main program function used to isolate globals from imported code.
Changes to globals in imported modules on Python 2.x will overwrite our own globals.
"""
import os
import sys
import types
# preload an empty ansible._vendor module to prevent use of any embedded modules during the import test
vendor_module_name = 'ansible._vendor'
vendor_module = types.ModuleType(vendor_module_name)
vendor_module.__file__ = os.path.join(os.path.sep.join(os.path.abspath(__file__).split(os.path.sep)[:-8]), 'lib/ansible/_vendor/__init__.py')
vendor_module.__path__ = []
vendor_module.__package__ = vendor_module_name
sys.modules[vendor_module_name] = vendor_module
import ansible
import contextlib
import datetime
import json
import re
import runpy
import subprocess
import traceback
import warnings
ansible_path = os.path.dirname(os.path.dirname(ansible.__file__))
temp_path = os.environ['SANITY_TEMP_PATH'] + os.path.sep
external_python = os.environ.get('SANITY_EXTERNAL_PYTHON')
yaml_to_json_path = os.environ.get('SANITY_YAML_TO_JSON')
collection_full_name = os.environ.get('SANITY_COLLECTION_FULL_NAME')
collection_root = os.environ.get('ANSIBLE_COLLECTIONS_PATH')
import_type = os.environ.get('SANITY_IMPORTER_TYPE')
try:
# noinspection PyCompatibility
from importlib import import_module
except ImportError:
def import_module(name):
__import__(name)
return sys.modules[name]
try:
# noinspection PyCompatibility
from StringIO import StringIO
except ImportError:
from io import StringIO
if collection_full_name:
# allow importing code from collections when testing a collection
from ansible.module_utils.common.text.converters import to_bytes, to_text, to_native, text_type
# noinspection PyProtectedMember
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder
from ansible.utils.collection_loader import _collection_finder
yaml_to_dict_cache = {}
# unique ISO date marker matching the one present in yaml_to_json.py
iso_date_marker = 'isodate:f23983df-f3df-453c-9904-bcd08af468cc:'
iso_date_re = re.compile('^%s([0-9]{4})-([0-9]{2})-([0-9]{2})$' % iso_date_marker)
def parse_value(value):
"""Custom value parser for JSON deserialization that recognizes our internal ISO date format."""
if isinstance(value, text_type):
match = iso_date_re.search(value)
if match:
value = datetime.date(int(match.group(1)), int(match.group(2)), int(match.group(3)))
return value
def object_hook(data):
"""Object hook for custom ISO date deserialization from JSON."""
return dict((key, parse_value(value)) for key, value in data.items())
def yaml_to_dict(yaml, content_id):
"""
Return a Python dict version of the provided YAML.
Conversion is done in a subprocess since the current Python interpreter does not have access to PyYAML.
"""
if content_id in yaml_to_dict_cache:
return yaml_to_dict_cache[content_id]
try:
cmd = [external_python, yaml_to_json_path]
proc = subprocess.Popen([to_bytes(c) for c in cmd], # pylint: disable=consider-using-with
stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout_bytes, stderr_bytes = proc.communicate(to_bytes(yaml))
if proc.returncode != 0:
raise Exception('command %s failed with return code %d: %s' % ([to_native(c) for c in cmd], proc.returncode, to_native(stderr_bytes)))
data = yaml_to_dict_cache[content_id] = json.loads(to_text(stdout_bytes), object_hook=object_hook)
return data
except Exception as ex:
raise Exception('internal importer error - failed to parse yaml: %s' % to_native(ex))
_collection_finder._meta_yml_to_dict = yaml_to_dict # pylint: disable=protected-access
collection_loader = _AnsibleCollectionFinder(paths=[collection_root])
# noinspection PyProtectedMember
collection_loader._install() # pylint: disable=protected-access
else:
# do not support collection loading when not testing a collection
collection_loader = None
if collection_loader and import_type == 'plugin':
# do not unload ansible code for collection plugin (not module) tests
# doing so could result in the collection loader being initialized multiple times
pass
else:
# remove all modules under the ansible package, except the preloaded vendor module
list(map(sys.modules.pop, [m for m in sys.modules if m.partition('.')[0] == ansible.__name__ and m != vendor_module_name]))
if import_type == 'module':
# pre-load an empty ansible package to prevent unwanted code in __init__.py from loading
# this more accurately reflects the environment that AnsiballZ runs modules under
# it also avoids issues with imports in the ansible package that are not allowed
ansible_module = types.ModuleType(ansible.__name__)
ansible_module.__file__ = ansible.__file__
ansible_module.__path__ = ansible.__path__
ansible_module.__package__ = ansible.__package__
sys.modules[ansible.__name__] = ansible_module
class ImporterAnsibleModuleException(Exception):
"""Exception thrown during initialization of ImporterAnsibleModule."""
class ImporterAnsibleModule:
"""Replacement for AnsibleModule to support import testing."""
def __init__(self, *args, **kwargs):
raise ImporterAnsibleModuleException()
class RestrictedModuleLoader:
"""Python module loader that restricts inappropriate imports."""
def __init__(self, path, name, restrict_to_module_paths):
self.path = path
self.name = name
self.loaded_modules = set()
self.restrict_to_module_paths = restrict_to_module_paths
def find_module(self, fullname, path=None):
"""Return self if the given fullname is restricted, otherwise return None.
:param fullname: str
:param path: str
:return: RestrictedModuleLoader | None
"""
if fullname in self.loaded_modules:
return None # ignore modules that are already being loaded
if is_name_in_namepace(fullname, ['ansible']):
if not self.restrict_to_module_paths:
return None # for non-modules, everything in the ansible namespace is allowed
if fullname in ('ansible.module_utils.basic',):
return self # intercept loading so we can modify the result
if is_name_in_namepace(fullname, ['ansible.module_utils', self.name]):
return None # module_utils and module under test are always allowed
if any(os.path.exists(candidate_path) for candidate_path in convert_ansible_name_to_absolute_paths(fullname)):
return self # restrict access to ansible files that exist
return None # ansible file does not exist, do not restrict access
if is_name_in_namepace(fullname, ['ansible_collections']):
if not collection_loader:
return self # restrict access to collections when we are not testing a collection
if not self.restrict_to_module_paths:
return None # for non-modules, everything in the ansible namespace is allowed
if is_name_in_namepace(fullname, ['ansible_collections...plugins.module_utils', self.name]):
return None # module_utils and module under test are always allowed
if collection_loader.find_module(fullname, path):
return self # restrict access to collection files that exist
return None # collection file does not exist, do not restrict access
# not a namespace we care about
return None
def load_module(self, fullname):
"""Raise an ImportError.
:type fullname: str
"""
if fullname == 'ansible.module_utils.basic':
module = self.__load_module(fullname)
# stop Ansible module execution during AnsibleModule instantiation
module.AnsibleModule = ImporterAnsibleModule
# no-op for _load_params since it may be called before instantiating AnsibleModule
module._load_params = lambda *args, **kwargs: {} # pylint: disable=protected-access
return module
raise ImportError('import of "%s" is not allowed in this context' % fullname)
def __load_module(self, fullname):
"""Load the requested module while avoiding infinite recursion.
:type fullname: str
:rtype: module
"""
self.loaded_modules.add(fullname)
return import_module(fullname)
def run(restrict_to_module_paths):
"""Main program function."""
base_dir = os.getcwd()
messages = set()
for path in sys.argv[1:] or sys.stdin.read().splitlines():
name = convert_relative_path_to_name(path)
test_python_module(path, name, base_dir, messages, restrict_to_module_paths)
if messages:
sys.exit(10)
def test_python_module(path, name, base_dir, messages, restrict_to_module_paths):
"""Test the given python module by importing it.
:type path: str
:type name: str
:type base_dir: str
:type messages: set[str]
:type restrict_to_module_paths: bool
"""
if name in sys.modules:
return # cannot be tested because it has already been loaded
is_ansible_module = (path.startswith('lib/ansible/modules/') or path.startswith('plugins/modules/')) and os.path.basename(path) != '__init__.py'
run_main = is_ansible_module
if path == 'lib/ansible/modules/async_wrapper.py':
# async_wrapper is a non-standard Ansible module (does not use AnsibleModule) so we cannot test the main function
run_main = False
capture_normal = Capture()
capture_main = Capture()
run_module_ok = False
try:
with monitor_sys_modules(path, messages):
with restrict_imports(path, name, messages, restrict_to_module_paths):
with capture_output(capture_normal):
import_module(name)
if run_main:
run_module_ok = is_ansible_module
with monitor_sys_modules(path, messages):
with restrict_imports(path, name, messages, restrict_to_module_paths):
with capture_output(capture_main):
runpy.run_module(name, run_name='__main__', alter_sys=True)
except ImporterAnsibleModuleException:
# module instantiated AnsibleModule without raising an exception
if not run_module_ok:
if is_ansible_module:
report_message(path, 0, 0, 'module-guard', "AnsibleModule instantiation not guarded by `if __name__ == '__main__'`", messages)
else:
report_message(path, 0, 0, 'non-module', "AnsibleModule instantiated by import of non-module", messages)
except BaseException as ex: # pylint: disable=locally-disabled, broad-except
# intentionally catch all exceptions, including calls to sys.exit
exc_type, _exc, exc_tb = sys.exc_info()
message = str(ex)
results = list(reversed(traceback.extract_tb(exc_tb)))
line = 0
offset = 0
full_path = os.path.join(base_dir, path)
base_path = base_dir + os.path.sep
source = None
# avoid line wraps in messages
message = re.sub(r'\n *', ': ', message)
for result in results:
if result[0] == full_path:
# save the line number for the file under test
line = result[1] or 0
if not source and result[0].startswith(base_path) and not result[0].startswith(temp_path):
# save the first path and line number in the traceback which is in our source tree
source = (os.path.relpath(result[0], base_path), result[1] or 0, 0)
if isinstance(ex, SyntaxError):
# SyntaxError has better information than the traceback
if ex.filename == full_path: # pylint: disable=locally-disabled, no-member
# syntax error was reported in the file under test
line = ex.lineno or 0 # pylint: disable=locally-disabled, no-member
offset = ex.offset or 0 # pylint: disable=locally-disabled, no-member
elif ex.filename.startswith(base_path) and not ex.filename.startswith(temp_path): # pylint: disable=locally-disabled, no-member
# syntax error was reported in our source tree
source = (os.path.relpath(ex.filename, base_path), ex.lineno or 0, ex.offset or 0) # pylint: disable=locally-disabled, no-member
# remove the filename and line number from the message
# either it was extracted above, or it's not really useful information
message = re.sub(r' \(.*?, line [0-9]+\)$', '', message)
if source and source[0] != path:
message += ' (at %s:%d:%d)' % (source[0], source[1], source[2])
report_message(path, line, offset, 'traceback', '%s: %s' % (exc_type.__name__, message), messages)
finally:
capture_report(path, capture_normal, messages)
capture_report(path, capture_main, messages)
def is_name_in_namepace(name, namespaces):
"""Returns True if the given name is one of the given namespaces, otherwise returns False."""
name_parts = name.split('.')
for namespace in namespaces:
namespace_parts = namespace.split('.')
length = min(len(name_parts), len(namespace_parts))
truncated_name = name_parts[0:length]
truncated_namespace = namespace_parts[0:length]
# empty parts in the namespace are treated as wildcards
# to simplify the comparison, use those empty parts to indicate the positions in the name to be empty as well
for idx, part in enumerate(truncated_namespace):
if not part:
truncated_name[idx] = part
# example: name=ansible, allowed_name=ansible.module_utils
# example: name=ansible.module_utils.system.ping, allowed_name=ansible.module_utils
if truncated_name == truncated_namespace:
return True
return False
def check_sys_modules(path, before, messages):
"""Check for unwanted changes to sys.modules.
:type path: str
:type before: dict[str, module]
:type messages: set[str]
"""
after = sys.modules
removed = set(before.keys()) - set(after.keys())
changed = set(key for key, value in before.items() if key in after and value != after[key])
# additions are checked by our custom PEP 302 loader, so we don't need to check them again here
for module in sorted(removed):
report_message(path, 0, 0, 'unload', 'unloading of "%s" in sys.modules is not supported' % module, messages)
for module in sorted(changed):
report_message(path, 0, 0, 'reload', 'reloading of "%s" in sys.modules is not supported' % module, messages)
def convert_ansible_name_to_absolute_paths(name):
"""Calculate the module path from the given name.
:type name: str
:rtype: list[str]
"""
return [
os.path.join(ansible_path, name.replace('.', os.path.sep)),
os.path.join(ansible_path, name.replace('.', os.path.sep)) + '.py',
]
def convert_relative_path_to_name(path):
"""Calculate the module name from the given path.
:type path: str
:rtype: str
"""
if path.endswith('/__init__.py'):
clean_path = os.path.dirname(path)
else:
clean_path = path
clean_path = os.path.splitext(clean_path)[0]
name = clean_path.replace(os.path.sep, '.')
if collection_loader:
# when testing collections the relative paths (and names) being tested are within the collection under test
name = 'ansible_collections.%s.%s' % (collection_full_name, name)
else:
# when testing ansible all files being imported reside under the lib directory
name = name[len('lib/'):]
return name
class Capture:
"""Captured output and/or exception."""
def __init__(self):
self.stdout = StringIO()
self.stderr = StringIO()
def capture_report(path, capture, messages):
"""Report on captured output.
:type path: str
:type capture: Capture
:type messages: set[str]
"""
if capture.stdout.getvalue():
first = capture.stdout.getvalue().strip().splitlines()[0].strip()
report_message(path, 0, 0, 'stdout', first, messages)
if capture.stderr.getvalue():
first = capture.stderr.getvalue().strip().splitlines()[0].strip()
report_message(path, 0, 0, 'stderr', first, messages)
def report_message(path, line, column, code, message, messages):
"""Report message if not already reported.
:type path: str
:type line: int
:type column: int
:type code: str
:type message: str
:type messages: set[str]
"""
message = '%s:%d:%d: %s: %s' % (path, line, column, code, message)
if message not in messages:
messages.add(message)
print(message)
@contextlib.contextmanager
def restrict_imports(path, name, messages, restrict_to_module_paths):
"""Restrict available imports.
:type path: str
:type name: str
:type messages: set[str]
:type restrict_to_module_paths: bool
"""
restricted_loader = RestrictedModuleLoader(path, name, restrict_to_module_paths)
# noinspection PyTypeChecker
sys.meta_path.insert(0, restricted_loader)
sys.path_importer_cache.clear()
try:
yield
finally:
if import_type == 'plugin' and not collection_loader:
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder
_AnsibleCollectionFinder._remove() # pylint: disable=protected-access
if sys.meta_path[0] != restricted_loader:
report_message(path, 0, 0, 'metapath', 'changes to sys.meta_path[0] are not permitted', messages)
while restricted_loader in sys.meta_path:
# noinspection PyTypeChecker
sys.meta_path.remove(restricted_loader)
sys.path_importer_cache.clear()
@contextlib.contextmanager
def monitor_sys_modules(path, messages):
"""Monitor sys.modules for unwanted changes, reverting any additions made to our own namespaces."""
snapshot = sys.modules.copy()
try:
yield
finally:
check_sys_modules(path, snapshot, messages)
for key in set(sys.modules.keys()) - set(snapshot.keys()):
if is_name_in_namepace(key, ('ansible', 'ansible_collections')):
del sys.modules[key] # only unload our own code since we know it's native Python
@contextlib.contextmanager
def capture_output(capture):
"""Capture sys.stdout and sys.stderr.
:type capture: Capture
"""
old_stdout = sys.stdout
old_stderr = sys.stderr
sys.stdout = capture.stdout
sys.stderr = capture.stderr
# clear all warnings registries to make all warnings available
for module in sys.modules.values():
try:
# noinspection PyUnresolvedReferences
module.__warningregistry__.clear()
except AttributeError:
pass
with warnings.catch_warnings():
warnings.simplefilter('error')
if collection_loader and import_type == 'plugin':
warnings.filterwarnings(
"ignore",
"AnsibleCollectionFinder has already been configured")
if sys.version_info[0] == 2:
warnings.filterwarnings(
"ignore",
"Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography,"
" and will be removed in a future release.")
warnings.filterwarnings(
"ignore",
"Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography,"
" and will be removed in the next release.")
if sys.version_info[:2] == (3, 5):
warnings.filterwarnings(
"ignore",
"Python 3.5 support will be dropped in the next release ofcryptography. Please upgrade your Python.")
warnings.filterwarnings(
"ignore",
"Python 3.5 support will be dropped in the next release of cryptography. Please upgrade your Python.")
if sys.version_info >= (3, 10):
# Temporary solution for Python 3.10 until find_spec is implemented in RestrictedModuleLoader.
# That implementation is dependent on find_spec being added to the controller's collection loader first.
# The warning text is: main.<locals>.RestrictedModuleLoader.find_spec() not found; falling back to find_module()
warnings.filterwarnings(
"ignore",
r"main\.<locals>\.RestrictedModuleLoader\.find_spec\(\) not found; falling back to find_module\(\)",
)
# Temporary solution for Python 3.10 until exec_module is implemented in RestrictedModuleLoader.
# That implementation is dependent on exec_module being added to the controller's collection loader first.
# The warning text is: main.<locals>.RestrictedModuleLoader.exec_module() not found; falling back to load_module()
warnings.filterwarnings(
"ignore",
r"main\.<locals>\.RestrictedModuleLoader\.exec_module\(\) not found; falling back to load_module\(\)",
)
# Temporary solution for Python 3.10 until find_spec is implemented in the controller's collection loader.
warnings.filterwarnings(
"ignore",
r"_Ansible.*Finder\.find_spec\(\) not found; falling back to find_module\(\)",
)
# Temporary solution for Python 3.10 until exec_module is implemented in the controller's collection loader.
warnings.filterwarnings(
"ignore",
r"_Ansible.*Loader\.exec_module\(\) not found; falling back to load_module\(\)",
)
try:
yield
finally:
sys.stdout = old_stdout
sys.stderr = old_stderr
run(import_type == 'module')
if __name__ == '__main__':
main()
|
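The Python 3.10 warning filters near the end of importer.py are a stopgap: the 3.10 import machinery warns whenever a meta path finder offers only the legacy PEP 302 `find_module()`/`load_module()` pair. A minimal sketch of the PEP 451 hooks this issue asks for, assuming the `RestrictedModuleLoader` defined above and deliberately leaving out its `ansible.module_utils.basic` patching branch:

```python
import importlib.util

class RestrictedModuleFinder:
    """Sketch only: a PEP 451 front end wrapping the legacy RestrictedModuleLoader."""

    def __init__(self, legacy):
        self.legacy = legacy  # a RestrictedModuleLoader instance

    def find_spec(self, fullname, path=None, target=None):
        if fullname == 'ansible.module_utils.basic':
            return None  # this sketch omits the patching done by load_module() for this name
        # reuse the existing restriction logic unchanged
        if self.legacy.find_module(fullname, path) is None:
            return None  # not restricted; defer to the remaining meta path finders
        # name ourselves as the loader so exec_module() decides what happens next
        return importlib.util.spec_from_loader(fullname, self)

    def create_module(self, spec):
        return None  # request default module creation

    def exec_module(self, module):
        # every spec we return is for a restricted name, so importing it must fail
        raise ImportError('import of "%s" is not allowed in this context' % module.__name__)
```

With something like this inserted into sys.meta_path by restrict_imports() instead of the legacy loader, the find_spec/exec_module fallback warnings filtered above would no longer fire.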
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,658 |
Update import test for Python 3.10
|
##### SUMMARY
Implement `find_spec` and `exec_module` in the `RestrictedModuleLoader` for Python 3.10 and later. This depends on updating the collection loader first.
See also: https://github.com/ansible/ansible/issues/72952
Depends on: https://github.com/ansible/ansible/issues/74660
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
test/lib/ansible_test/_data/sanity/import/importer.py
|
https://github.com/ansible/ansible/issues/74658
|
https://github.com/ansible/ansible/pull/76427
|
f96a661adaf627764fc3527cab1ca4d829d69ea1
|
2769f5621b1d91a6eceb14d671ba9f0d07db6825
| 2021-05-11T16:13:20Z |
python
| 2022-03-23T20:55:37Z |
test/sanity/ignore.txt
|
.azure-pipelines/scripts/publish-codecov.py replace-urlopen
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
docs/docsite/rst/locales/ja/LC_MESSAGES/dev_guide.po no-smart-quotes # Translation of the no-smart-quotes rule
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
lib/ansible/cli/galaxy.py import-3.8 # unguarded indirect resolvelib import
lib/ansible/galaxy/collection/__init__.py import-3.8 # unguarded resolvelib import
lib/ansible/galaxy/collection/concrete_artifact_manager.py import-3.8 # unguarded resolvelib import
lib/ansible/galaxy/collection/galaxy_api_proxy.py import-3.8 # unguarded resolvelib imports
lib/ansible/galaxy/collection/gpg.py import-3.8 # unguarded resolvelib imports
lib/ansible/galaxy/dependency_resolution/__init__.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/dataclasses.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/errors.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/providers.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/reporters.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/resolvers.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/versioning.py import-3.8 # circular imports
lib/ansible/cli/galaxy.py import-3.9 # unguarded indirect resolvelib import
lib/ansible/galaxy/collection/__init__.py import-3.9 # unguarded resolvelib import
lib/ansible/galaxy/collection/concrete_artifact_manager.py import-3.9 # unguarded resolvelib import
lib/ansible/galaxy/collection/galaxy_api_proxy.py import-3.9 # unguarded resolvelib imports
lib/ansible/galaxy/collection/gpg.py import-3.9 # unguarded resolvelib imports
lib/ansible/galaxy/dependency_resolution/__init__.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/dataclasses.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/errors.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/providers.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/reporters.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/resolvers.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/versioning.py import-3.9 # circular imports
lib/ansible/cli/galaxy.py import-3.10 # unguarded indirect resolvelib import
lib/ansible/galaxy/collection/__init__.py import-3.10 # unguarded resolvelib import
lib/ansible/galaxy/collection/concrete_artifact_manager.py import-3.10 # unguarded resolvelib import
lib/ansible/galaxy/collection/galaxy_api_proxy.py import-3.10 # unguarded resolvelib imports
lib/ansible/galaxy/collection/gpg.py import-3.10 # unguarded resolvelib imports
lib/ansible/galaxy/dependency_resolution/__init__.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/dataclasses.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/errors.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/providers.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/reporters.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/resolvers.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/versioning.py import-3.10 # circular imports
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:disallowed-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:disallowed-name
lib/ansible/keyword_desc.yml no-unwanted-files
lib/ansible/modules/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/async_status.py use-argspec-type-path
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/command.py validate-modules:doc-default-does-not-match-spec # _uses_shell is undocumented
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/copy.py pylint:disallowed-name
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/git.py pylint:disallowed-name
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-missing-type
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/hostname.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/iptables.py pylint:disallowed-name
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/pip.py pylint:disallowed-name
lib/ansible/modules/pip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/stat.py validate-modules:doc-default-does-not-match-spec # get_md5 is undocumented
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/yum.py pylint:disallowed-name
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py pylint:disallowed-name
lib/ansible/module_utils/compat/selinux.py import-2.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.5!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.6!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.8!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.9!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pylint:using-constant-test # bundled code we don't want to modify
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:disallowed-name
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py pylint:self-assigning-variable
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/urls.py pylint:arguments-renamed
lib/ansible/module_utils/urls.py pylint:disallowed-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/parsing/vault/__init__.py pylint:disallowed-name
lib/ansible/parsing/yaml/objects.py pylint:arguments-renamed
lib/ansible/playbook/base.py pylint:disallowed-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:disallowed-name
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/callback/__init__.py pylint:arguments-renamed
lib/ansible/plugins/inventory/advanced_host_list.py pylint:arguments-renamed
lib/ansible/plugins/inventory/host_list.py pylint:arguments-renamed
lib/ansible/plugins/lookup/random_choice.py pylint:arguments-renamed
lib/ansible/plugins/lookup/sequence.py pylint:disallowed-name
lib/ansible/plugins/shell/cmd.py pylint:arguments-renamed
lib/ansible/plugins/strategy/__init__.py pylint:disallowed-name
lib/ansible/plugins/strategy/linear.py pylint:disallowed-name
lib/ansible/utils/collection_loader/_collection_finder.py pylint:deprecated-class
lib/ansible/utils/collection_loader/_collection_meta.py pylint:deprecated-class
lib/ansible/vars/hostvars.py pylint:disallowed-name
test/integration/targets/ansible-test/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/gathering_facts/library/facts_one shebang
test/integration/targets/gathering_facts/library/facts_two shebang
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/json_cleanup/library/bad_json shebang
test/integration/targets/lookup_csvfile/files/crlf.csv line-endings
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/foo.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:disallowed-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_fetch/tasks/main.yml no-smart-quotes # We are explicitly testing smart quotes in the file name to fetch
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/lib/ansible_test/_data/requirements/sanity.pslint.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/lib/ansible_test/_util/target/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/lib/ansible_test/_util/target/setup/requirements.py replace-urlopen
test/support/integration/plugins/inventory/aws_ec2.py pylint:use-a-generator
test/support/integration/plugins/modules/ec2_group.py pylint:use-a-generator
test/support/integration/plugins/modules/timezone.py pylint:disallowed-name
test/support/integration/plugins/module_utils/aws/core.py pylint:property-with-parameters
test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate
test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate
test/support/integration/plugins/module_utils/cloud.py pylint:isinstance-second-argument-not-valid-type
test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py pylint:use-a-generator
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/filter/network.py pylint:consider-using-dict-comprehension
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py pylint:use-a-generator
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/netconf/default.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/cliconf/ios.py pylint:arguments-renamed
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/cliconf/vyos.py pylint:arguments-renamed
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:disallowed-name
test/support/windows-integration/collections/ansible_collections/ansible/windows/plugins/module_utils/WebRequest.psm1 pslint!skip
test/support/windows-integration/collections/ansible_collections/ansible/windows/plugins/modules/win_uri.ps1 pslint!skip
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/slurp.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_acl.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_certificate_store.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_command.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_file.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_get_url.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_stat.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_tempfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_user_right.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_user.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_whoami.ps1 pslint!skip
test/units/executor/test_play_iterator.py pylint:disallowed-name
test/units/modules/test_apt.py pylint:disallowed-name
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/basic/test_run_command.py pylint:disallowed-name
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/parsing/vault/test_vault.py pylint:disallowed-name
test/units/playbook/role/test_role.py pylint:disallowed-name
test/units/plugins/test_plugins.py pylint:disallowed-name
test/units/template/test_templar.py pylint:disallowed-name
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
lib/ansible/module_utils/six/__init__.py mypy-2.7:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.5:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.6:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.7:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.8:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.9:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.10:has-type # vendored code
lib/ansible/module_utils/six/__init__.py mypy-2.7:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.5:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.6:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.7:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.8:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.9:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.10:name-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-2.7:assignment # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.5:assignment # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.6:assignment # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.7:assignment # vendored code
lib/ansible/module_utils/six/__init__.py mypy-2.7:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.5:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.6:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.7:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.8:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.9:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.10:misc # vendored code
lib/ansible/module_utils/six/__init__.py mypy-2.7:var-annotated # vendored code
lib/ansible/module_utils/six/__init__.py mypy-2.7:attr-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.5:var-annotated # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.5:attr-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.6:var-annotated # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.6:attr-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.7:var-annotated # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.7:attr-defined # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.8:var-annotated # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.9:var-annotated # vendored code
lib/ansible/module_utils/six/__init__.py mypy-3.10:var-annotated # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-2.7:arg-type # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-3.5:valid-type # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-3.6:valid-type # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-3.7:valid-type # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-2.7:assignment # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-2.7:attr-defined # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-3.5:attr-defined # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-3.6:attr-defined # vendored code
lib/ansible/module_utils/distro/_distro.py mypy-3.7:attr-defined # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-2.7:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.5:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.6:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.7:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.8:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.9:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.10:misc # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-2.7:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.5:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.6:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.7:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.8:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.9:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-3.10:assignment # vendored code
lib/ansible/module_utils/compat/_selectors2.py mypy-2.7:attr-defined # vendored code
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,316 |
Ansible.ModuleUtils.SID.psm1: Function Convert-ToSID fails if principal has more than 20 chars
|
### Summary
The function Convert-ToSID from Ansible.ModuleUtils.SID.psm1 states that it can handle a UPN as input. This is true as long as the principal matches the SAMAccountName. If a principal is longer than 20 chars, the SAMAccountName is truncated to 20 chars in AD.
Translate([System.Security.Principal.SecurityIdentifier]) then fails.
Workaround: If one uses the SAMAccountName instead of the principal, the function works correctly.
### Issue Type
Bug Report
### Component Name
Ansible.ModuleUtils.SID.psm1
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.7]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.12 (default, Jan 29 2022, 05:15:52) [GCC 10.2.1 20210110]
jinja version = 3.0.3
libyaml = True
Affected code is the latest repo version of https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/powershell/Ansible.ModuleUtils.SID.psm1
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Targets are Windows systems running Server 2016 and 2019. The bug is in the PowerShell code.
### Steps to Reproduce
```powershell
# Import Ansible.ModuleUtils.SID.psm1
PS C:\Users\test> Convert-ToSID -account_name [email protected]
Convert-ToSID : account_name [email protected] is not a valid account, cannot get SID: Exception calling "Translate" with "1"
argument(s): "Some or all identity references could not be translated."
At line:1 char:1
+ Convert-ToSID -account_name [email protected]
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Convert-ToSID
or just
$account = New-Object System.Security.Principal.NTAccount($domain, $username)
$account.Translate([System.Security.Principal.SecurityIdentifier])
with proper inputs for $domain and $username
```
Account details:
```powershell
PS C:\Users\test> Get-ADUser -Filter {Name -like "*thisisalongname12345678*"}
DistinguishedName : CN=thisisalongname12345678,OU=Service Accounts,DC=domain,DC=com
Enabled : True
GivenName :
Name : thisisalongname12345678
ObjectClass : user
ObjectGUID : bca42554-2785-4e07-a4bc-b255bfa63f2f
SamAccountName : thisisalongname12345
SID : S-1-5-21-776561741-1229272821-682004430-105809
Surname :
UserPrincipalName : [email protected]
```
### Expected Results
```
BinaryLength AccountDomainSid Value
------------ ---------------- -----
28 S-1-5-21-776561741-1229272821-682004430 S-1-5-21-776561741-1229272821-682004430-105809
```
This can be achieved by using the SAMAccountName in UPN format as a workaround:
```Convert-ToSID -account_name [email protected]```
### Actual Results
```console
Convert-ToSID : account_name [email protected] is not a valid account, cannot get SID: Exception calling "Translate" with "1"
argument(s): "Some or all identity references could not be translated."
At line:1 char:1
+ Convert-ToSID -account_name [email protected]
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Convert-ToSID
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77316
|
https://github.com/ansible/ansible/pull/77334
|
27aa2612d9d9a378ca2e1a20ec7787e2b93f90b5
|
ff184b0815cdbf7dc222fd9d7b0cfaa93d5fe03c
| 2022-03-18T08:52:31Z |
python
| 2022-03-24T19:01:45Z |
changelogs/fragments/ModuleUtils.SID-long-username.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,316 |
Ansible.ModuleUtils.SID.psm1: Function Convert-ToSID fails if principal has more than 20 chars
|
### Summary
The function Convert-ToSID from Ansible.ModuleUtils.SID.psm1 states that it can handle a UPN as input. That holds only while the UPN prefix matches the SAMAccountName: if a principal's name is longer than 20 characters, AD truncates the SAMAccountName to 20 characters.
Translate([System.Security.Principal.SecurityIdentifier]) then fails.
Workaround: if one uses the SAMAccountName instead of the principal, the function works correctly.
### Issue Type
Bug Report
### Component Name
Ansible.ModuleUtils.SID.psm1
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.7]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.12 (default, Jan 29 2022, 05:15:52) [GCC 10.2.1 20210110]
jinja version = 3.0.3
libyaml = True
Affected code is the latest repo version of https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/powershell/Ansible.ModuleUtils.SID.psm1
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Targets are Windows systems running Server 2016 and 2019; the bug is in the PowerShell code.
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```powershell
# Import Ansible.ModuleUtils.SID.psm1
PS C:\Users\test> Convert-ToSID -account_name [email protected]
Convert-ToSID : account_name [email protected] is not a valid account, cannot get SID: Exception calling "Translate" with "1"
argument(s): "Some or all identity references could not be translated."
At line:1 char:1
+ Convert-ToSID -account_name [email protected]
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Convert-ToSID
or just
$account = New-Object System.Security.Principal.NTAccount($domain, $username)
$account.Translate([System.Security.Principal.SecurityIdentifier])
with proper inputs for $domain and $username
```
Account details:
```powershell
PS C:\Users\test> Get-ADUser -Filter {Name -like "*thisisalongname12345678*"}
DistinguishedName : CN=thisisalongname12345678,OU=Service Accounts,DC=domain,DC=com
Enabled : True
GivenName :
Name : thisisalongname12345678
ObjectClass : user
ObjectGUID : bca42554-2785-4e07-a4bc-b255bfa63f2f
SamAccountName : thisisalongname12345
SID : S-1-5-21-776561741-1229272821-682004430-105809
Surname :
UserPrincipalName : [email protected]
```
### Expected Results
```
BinaryLength AccountDomainSid Value
------------ ---------------- -----
28 S-1-5-21-776561741-1229272821-682004430 S-1-5-21-776561741-1229272821-682004430-105809
```
This can be achieved by using the SAMAccountName in UPN format as a workaround:
```Convert-ToSID -account_name [email protected]```
### Actual Results
```console
Convert-ToSID : account_name [email protected] is not a valid account, cannot get SID: Exception calling "Translate" with "1"
argument(s): "Some or all identity references could not be translated."
At line:1 char:1
+ Convert-ToSID -account_name [email protected]
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Convert-ToSID
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77316
|
https://github.com/ansible/ansible/pull/77334
|
27aa2612d9d9a378ca2e1a20ec7787e2b93f90b5
|
ff184b0815cdbf7dc222fd9d7b0cfaa93d5fe03c
| 2022-03-18T08:52:31Z |
python
| 2022-03-24T19:01:45Z |
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.SID.psm1
|
# Copyright (c) 2017 Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
Function Convert-FromSID($sid) {
# Converts a SID to a Down-Level Logon name in the form of DOMAIN\UserName
# If the SID is for a local user or group then DOMAIN would be the server
# name.
$account_object = New-Object System.Security.Principal.SecurityIdentifier($sid)
try {
$nt_account = $account_object.Translate([System.Security.Principal.NTAccount])
}
catch {
Fail-Json -obj @{} -message "failed to convert sid '$sid' to a logon name: $($_.Exception.Message)"
}
return $nt_account.Value
}
Function Convert-ToSID {
[Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSAvoidUsingEmptyCatchBlock", "",
Justification = "We don't care if converting to a SID fails, just that it failed or not")]
param($account_name)
# Converts an account name to a SID, it can take in the following forms
# SID: Will just return the SID value that was passed in
# UPN:
# principal@domain (Domain users only)
# Down-Level Login Name
# DOMAIN\principal (Domain)
# SERVERNAME\principal (Local)
# .\principal (Local)
# NT AUTHORITY\SYSTEM (Local Service Accounts)
# Login Name
# principal (Local/Local Service Accounts)
try {
$sid = New-Object -TypeName System.Security.Principal.SecurityIdentifier -ArgumentList $account_name
return $sid.Value
}
catch {}
if ($account_name -like "*\*") {
$account_name_split = $account_name -split "\\"
if ($account_name_split[0] -eq ".") {
$domain = $env:COMPUTERNAME
}
else {
$domain = $account_name_split[0]
}
$username = $account_name_split[1]
}
elseif ($account_name -like "*@*") {
$account_name_split = $account_name -split "@"
$domain = $account_name_split[1]
$username = $account_name_split[0]
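# note: when the UPN prefix is longer than 20 characters it may no longer
# match the (truncated) SamAccountName, so the Translate() call below can
# fail to resolve the account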
}
else {
$domain = $null
$username = $account_name
}
if ($domain) {
# looking up a local group with the server name prefixed will fail, so
# detect that case and fall back to the single-argument NTAccount(String)
if ($domain -eq $env:COMPUTERNAME) {
$adsi = [ADSI]("WinNT://$env:COMPUTERNAME,computer")
$group = $adsi.psbase.children | Where-Object { $_.schemaClassName -eq "group" -and $_.Name -eq $username }
}
else {
$group = $null
}
if ($group) {
$account = New-Object System.Security.Principal.NTAccount($username)
}
else {
$account = New-Object System.Security.Principal.NTAccount($domain, $username)
}
}
else {
# when in a domain NTAccount(String) will favour domain lookups check
# if username is a local user and explicitly search on the localhost for
# that account
$adsi = [ADSI]("WinNT://$env:COMPUTERNAME,computer")
$user = $adsi.psbase.children | Where-Object { $_.schemaClassName -eq "user" -and $_.Name -eq $username }
if ($user) {
$account = New-Object System.Security.Principal.NTAccount($env:COMPUTERNAME, $username)
}
else {
$account = New-Object System.Security.Principal.NTAccount($username)
}
}
try {
$account_sid = $account.Translate([System.Security.Principal.SecurityIdentifier])
}
catch {
Fail-Json @{} "account_name $account_name is not a valid account, cannot get SID: $($_.Exception.Message)"
}
return $account_sid.Value
}
# this line must stay at the bottom to ensure all defined module parts are exported
Export-ModuleMember -Alias * -Function * -Cmdlet *
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,368 |
Wrong python interpreter discovered in ansible core 2.13.0dev
|
### Summary
ansible [core 2.13.0.dev0] reports the wrong `discovered_interpreter_python`. Here is a comparison from the same host for 2.12.1 and 2.13:
```
[root@perfscale-jenkins-max-control-11-exec-controlplane-1-vsi ansible-tower-setup]# ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible [core 2.13.0.dev0]
config file = /root/ansible-tower-setup/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Mar 5 2021, 01:49:45) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.3
libyaml = True
[root@perfscale-jenkins-max-control-11-exec-controlplane-1-vsi ansible-tower-setup]# ansible -i inventory.cluster automationcontroller -m setup | grep -i discovered_interpreter_python
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
[WARNING]: Platform linux on host xx.xx.xx.xx is using the discovered
Python interpreter at /usr/bin/python3.9, but future installation of another
Python interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-
core/devel/reference_appendices/interpreter_discovery.html for more
information.
"discovered_interpreter_python": "/usr/bin/python3.9",
```
In 2.12.1
```
[root@perfscale-jenkins-max-control-11-exec-controlplane-1-vsi ansible-tower-setup]# ansible --version
ansible [core 2.12.1]
config file = /root/ansible-tower-setup/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
[root@perfscale-jenkins-max-control-11-exec-controlplane-1-vsi ansible-tower-setup]# ansible -i inventory.cluster automationcontroller -m setup | grep -i discovered_interpreter_python
"discovered_interpreter_python": "/usr/libexec/platform-python",
```
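For context, interpreter discovery effectively walks an ordered candidate list on the target and keeps the first candidate that exists; a rough sketch (candidate order abridged and assumed here, not the exact 2.13 list):
```python
import os

# first existing candidate wins; on RHEL 8 that should be platform-python
candidates = ['/usr/libexec/platform-python', '/usr/bin/python3', '/usr/bin/python']
discovered = next((path for path in candidates if os.path.exists(path)), None)
print('discovered_interpreter_python:', discovered)
```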
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.0.dev0]
config file = /root/ansible-tower-setup/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Mar 5 2021, 01:49:45) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
RHEL 8.4
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
Try to run the setup module and look for `discovered_interpreter_python`
```
### Expected Results
```console
"discovered_interpreter_python": "/usr/libexec/platform-python"
```
### Actual Results
```console
"discovered_interpreter_python": "/usr/bin/python3.9"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77368
|
https://github.com/ansible/ansible/pull/77371
|
f03624e2957783e6b1ff6a005a978d945de59443
|
4723eb9caa12e1b99c4c411199bd4d4ab272534d
| 2022-03-28T13:36:36Z |
python
| 2022-03-28T16:47:09Z |
changelogs/fragments/77368-rhel-os-family-compat.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,368 |
Wrong python interpreter discovered in ansible core 2.13.0dev
|
### Summary
ansible [core 2.13.0.dev0] reports the wrong `discovered_interpreter_python`. Here is a comparison from the same host for 2.12.1 and 2.13:
```
[root@perfscale-jenkins-max-control-11-exec-controlplane-1-vsi ansible-tower-setup]# ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible [core 2.13.0.dev0]
config file = /root/ansible-tower-setup/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Mar 5 2021, 01:49:45) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.3
libyaml = True
[root@perfscale-jenkins-max-control-11-exec-controlplane-1-vsi ansible-tower-setup]# ansible -i inventory.cluster automationcontroller -m setup | grep -i discovered_interpreter_python
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
[WARNING]: Platform linux on host xx.xx.xx.xx is using the discovered
Python interpreter at /usr/bin/python3.9, but future installation of another
Python interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-
core/devel/reference_appendices/interpreter_discovery.html for more
information.
"discovered_interpreter_python": "/usr/bin/python3.9",
```
In 2.12.1
```
[root@perfscale-jenkins-max-control-11-exec-controlplane-1-vsi ansible-tower-setup]# ansible --version
ansible [core 2.12.1]
config file = /root/ansible-tower-setup/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
[root@perfscale-jenkins-max-control-11-exec-controlplane-1-vsi ansible-tower-setup]# ansible -i inventory.cluster automationcontroller -m setup | grep -i discovered_interpreter_python
"discovered_interpreter_python": "/usr/libexec/platform-python",
```
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.0.dev0]
config file = /root/ansible-tower-setup/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Mar 5 2021, 01:49:45) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
RHEL 8.4
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
Try to run the setup module and look for `discovered_interpreter_python`
```
### Expected Results
```console
"discovered_interpreter_python": "/usr/libexec/platform-python"
```
### Actual Results
```console
"discovered_interpreter_python": "/usr/bin/python3.9"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77368
|
https://github.com/ansible/ansible/pull/77371
|
f03624e2957783e6b1ff6a005a978d945de59443
|
4723eb9caa12e1b99c4c411199bd4d4ab272534d
| 2022-03-28T13:36:36Z |
python
| 2022-03-28T16:47:09Z |
lib/ansible/module_utils/facts/system/distribution.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import platform
import re
import ansible.module_utils.compat.typing as t
from ansible.module_utils.common.sys_info import get_distribution, get_distribution_version, \
get_distribution_codename
from ansible.module_utils.facts.utils import get_file_content
from ansible.module_utils.facts.collector import BaseFactCollector
def get_uname(module, flags=('-v')):
if isinstance(flags, str):
flags = flags.split()
command = ['uname']
command.extend(flags)
rc, out, err = module.run_command(command)
if rc == 0:
return out
return None
def _file_exists(path, allow_empty=False):
# not finding the file, exit early
if not os.path.exists(path):
return False
# if just the path needs to exists (ie, it can be empty) we are done
if allow_empty:
return True
# file exists but is empty and we don't allow_empty
if os.path.getsize(path) == 0:
return False
# file exists with some content
return True
class DistributionFiles:
'''has-a various distro file parsers (os-release, etc) and logic for finding the right one.'''
# every distribution name mentioned here, must have one of
# - allowempty == True
# - be listed in SEARCH_STRING
# - have a function get_distribution_DISTNAME implemented
# keep names in sync with Conditionals page of docs
OSDIST_LIST = (
{'path': '/etc/altlinux-release', 'name': 'Altlinux'},
{'path': '/etc/oracle-release', 'name': 'OracleLinux'},
{'path': '/etc/slackware-version', 'name': 'Slackware'},
{'path': '/etc/centos-release', 'name': 'CentOS'},
{'path': '/etc/redhat-release', 'name': 'RedHat'},
{'path': '/etc/vmware-release', 'name': 'VMwareESX', 'allowempty': True},
{'path': '/etc/openwrt_release', 'name': 'OpenWrt'},
{'path': '/etc/os-release', 'name': 'Amazon'},
{'path': '/etc/system-release', 'name': 'Amazon'},
{'path': '/etc/alpine-release', 'name': 'Alpine'},
{'path': '/etc/arch-release', 'name': 'Archlinux', 'allowempty': True},
{'path': '/etc/os-release', 'name': 'Archlinux'},
{'path': '/etc/os-release', 'name': 'SUSE'},
{'path': '/etc/SuSE-release', 'name': 'SUSE'},
{'path': '/etc/gentoo-release', 'name': 'Gentoo'},
{'path': '/etc/os-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Mandriva'},
{'path': '/etc/sourcemage-release', 'name': 'SMGL'},
{'path': '/usr/lib/os-release', 'name': 'ClearLinux'},
{'path': '/etc/coreos/update.conf', 'name': 'Coreos'},
{'path': '/etc/flatcar/update.conf', 'name': 'Flatcar'},
{'path': '/etc/os-release', 'name': 'NA'},
)
SEARCH_STRING = {
'OracleLinux': 'Oracle Linux',
'RedHat': 'Red Hat',
'Altlinux': 'ALT',
'SMGL': 'Source Mage GNU/Linux',
}
# We can't include this in SEARCH_STRING because a name match on its keys
# causes a fallback to using the first whitespace separated item from the file content
# as the name. For os-release, that is in form 'NAME=Arch'
OS_RELEASE_ALIAS = {
'Archlinux': 'Arch Linux'
}
STRIP_QUOTES = r'\'\"\\'
def __init__(self, module):
self.module = module
def _get_file_content(self, path):
return get_file_content(path)
def _get_dist_file_content(self, path, allow_empty=False):
# can't find that dist file or it is incorrectly empty
if not _file_exists(path, allow_empty=allow_empty):
return False, None
data = self._get_file_content(path)
return True, data
def _parse_dist_file(self, name, dist_file_content, path, collected_facts):
dist_file_dict = {}
dist_file_content = dist_file_content.strip(DistributionFiles.STRIP_QUOTES)
if name in self.SEARCH_STRING:
# look for the distribution string in the data and replace according to RELEASE_NAME_MAP
# only the distribution name is set, the version is assumed to be correct from distro.linux_distribution()
if self.SEARCH_STRING[name] in dist_file_content:
# this sets distribution=RedHat if 'Red Hat' shows up in data
dist_file_dict['distribution'] = name
dist_file_dict['distribution_file_search_string'] = self.SEARCH_STRING[name]
else:
# this sets distribution to what's in the data, e.g. CentOS, Scientific, ...
dist_file_dict['distribution'] = dist_file_content.split()[0]
return True, dist_file_dict
if name in self.OS_RELEASE_ALIAS:
if self.OS_RELEASE_ALIAS[name] in dist_file_content:
dist_file_dict['distribution'] = name
return True, dist_file_dict
return False, dist_file_dict
# call a dedicated function for parsing the file content
# TODO: replace with a map or a class
try:
# FIXME: most of these don't actually look at the dist file contents, but at random other stuff
distfunc_name = 'parse_distribution_file_' + name
distfunc = getattr(self, distfunc_name)
parsed, dist_file_dict = distfunc(name, dist_file_content, path, collected_facts)
return parsed, dist_file_dict
except AttributeError as exc:
self.module.debug('exc: %s' % exc)
# this should never happen, but if it does fail quietly and not with a traceback
return False, dist_file_dict
return True, dist_file_dict
# to debug multiple matching release files, one can use:
# self.facts['distribution_debug'].append({path + ' ' + name:
# (parsed,
# self.facts['distribution'],
# self.facts['distribution_version'],
# self.facts['distribution_release'],
# )})
def _guess_distribution(self):
# try to find out which linux distribution this is
dist = (get_distribution(), get_distribution_version(), get_distribution_codename())
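# dist is (name, version, codename) as reported by
# ansible.module_utils.common.sys_info, backed by the vendored distro library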
distribution_guess = {
'distribution': dist[0] or 'NA',
'distribution_version': dist[1] or 'NA',
# distribution_release can be the empty string
'distribution_release': 'NA' if dist[2] is None else dist[2]
}
distribution_guess['distribution_major_version'] = distribution_guess['distribution_version'].split('.')[0] or 'NA'
return distribution_guess
def process_dist_files(self):
# Try to handle the exceptions now ...
# self.facts['distribution_debug'] = []
dist_file_facts = {}
dist_guess = self._guess_distribution()
dist_file_facts.update(dist_guess)
for ddict in self.OSDIST_LIST:
name = ddict['name']
path = ddict['path']
allow_empty = ddict.get('allowempty', False)
has_dist_file, dist_file_content = self._get_dist_file_content(path, allow_empty=allow_empty)
# an empty dist file can still identify the distribution when allow_empty is
# set; for example, ArchLinux with an empty /etc/arch-release and a
# /etc/os-release with a different name
if has_dist_file and allow_empty:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
dist_file_facts['distribution_file_variety'] = name
break
if not has_dist_file:
# keep looking
continue
parsed_dist_file, parsed_dist_file_facts = self._parse_dist_file(name, dist_file_content, path, dist_file_facts)
# finally found the right os dist file and were able to parse it
if parsed_dist_file:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
# distribution and file_variety are the same here, but distribution
# will be changed/mapped to a more specific name.
# ie, dist=Fedora, file_variety=RedHat
dist_file_facts['distribution_file_variety'] = name
dist_file_facts['distribution_file_parsed'] = parsed_dist_file
dist_file_facts.update(parsed_dist_file_facts)
break
return dist_file_facts
# TODO: FIXME: split distro file parsing into its own module or class
def parse_distribution_file_Slackware(self, name, data, path, collected_facts):
slackware_facts = {}
if 'Slackware' not in data:
return False, slackware_facts # TODO: remove
slackware_facts['distribution'] = name
version = re.findall(r'\w+[.]\w+\+?', data)
if version:
slackware_facts['distribution_version'] = version[0]
return True, slackware_facts
def parse_distribution_file_Amazon(self, name, data, path, collected_facts):
amazon_facts = {}
if 'Amazon' not in data:
return False, amazon_facts
amazon_facts['distribution'] = 'Amazon'
if path == '/etc/os-release':
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
distribution_version = version.group(1)
amazon_facts['distribution_version'] = distribution_version
version_data = distribution_version.split(".")
if len(version_data) > 1:
major, minor = version_data
else:
major, minor = version_data[0], 'NA'
amazon_facts['distribution_major_version'] = major
amazon_facts['distribution_minor_version'] = minor
else:
version = [n for n in data.split() if n.isdigit()]
version = version[0] if version else 'NA'
amazon_facts['distribution_version'] = version
return True, amazon_facts
def parse_distribution_file_OpenWrt(self, name, data, path, collected_facts):
openwrt_facts = {}
if 'OpenWrt' not in data:
return False, openwrt_facts # TODO: remove
openwrt_facts['distribution'] = name
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
openwrt_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
openwrt_facts['distribution_release'] = release.groups()[0]
return True, openwrt_facts
def parse_distribution_file_Alpine(self, name, data, path, collected_facts):
alpine_facts = {}
alpine_facts['distribution'] = 'Alpine'
alpine_facts['distribution_version'] = data
return True, alpine_facts
def parse_distribution_file_SUSE(self, name, data, path, collected_facts):
suse_facts = {}
if 'suse' not in data.lower():
return False, suse_facts # TODO: remove if tested without this
if path == '/etc/os-release':
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution:
suse_facts['distribution'] = distribution.group(1).strip('"')
# example version patterns are 13.04, 13.0, 13
distribution_version = re.search(r'^VERSION_ID="?([0-9]+\.?[0-9]*)"?', line)
if distribution_version:
suse_facts['distribution_version'] = distribution_version.group(1)
suse_facts['distribution_major_version'] = distribution_version.group(1).split('.')[0]
if 'open' in data.lower():
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release:
suse_facts['distribution_release'] = release.groups()[0]
elif 'enterprise' in data.lower() and 'VERSION_ID' in line:
# SLES doesn't have funny release names
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release.group(1):
release = release.group(1)
else:
release = "0" # no minor number, so it is the first release
suse_facts['distribution_release'] = release
elif path == '/etc/SuSE-release':
if 'open' in data.lower():
data = data.splitlines()
distdata = get_file_content(path).splitlines()[0]
suse_facts['distribution'] = distdata.split()[0]
for line in data:
release = re.search('CODENAME *= *([^\n]+)', line)
if release:
suse_facts['distribution_release'] = release.groups()[0].strip()
elif 'enterprise' in data.lower():
lines = data.splitlines()
distribution = lines[0].split()[0]
if "Server" in data:
suse_facts['distribution'] = "SLES"
elif "Desktop" in data:
suse_facts['distribution'] = "SLED"
for line in lines:
release = re.search('PATCHLEVEL = ([0-9]+)', line)  # SLES doesn't have funny release names
if release:
suse_facts['distribution_release'] = release.group(1)
suse_facts['distribution_version'] = collected_facts['distribution_version'] + '.' + release.group(1)
# See https://www.suse.com/support/kb/doc/?id=000019341 for SLES for SAP
if os.path.islink('/etc/products.d/baseproduct') and os.path.realpath('/etc/products.d/baseproduct').endswith('SLES_SAP.prod'):
suse_facts['distribution'] = 'SLES_SAP'
return True, suse_facts
def parse_distribution_file_Debian(self, name, data, path, collected_facts):
debian_facts = {}
if 'Debian' in data or 'Raspbian' in data:
debian_facts['distribution'] = 'Debian'
release = re.search(r"PRETTY_NAME=[^(]+ \(?([^)]+?)\)", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
# Last resort: try to find release from tzdata as either lsb is missing or this is very old debian
if collected_facts['distribution_release'] == 'NA' and 'Debian' in data:
dpkg_cmd = self.module.get_bin_path('dpkg')
if dpkg_cmd:
cmd = "%s --status tzdata|grep Provides|cut -f2 -d'-'" % dpkg_cmd
rc, out, err = self.module.run_command(cmd)
if rc == 0:
debian_facts['distribution_release'] = out.strip()
elif 'Ubuntu' in data:
debian_facts['distribution'] = 'Ubuntu'
# nothing else to do, Ubuntu gets correct info from python functions
elif 'SteamOS' in data:
debian_facts['distribution'] = 'SteamOS'
# nothing else to do, SteamOS gets correct info from python functions
elif path in ('/etc/lsb-release', '/etc/os-release') and ('Kali' in data or 'Parrot' in data):
if 'Kali' in data:
# Kali does not provide /etc/lsb-release anymore
debian_facts['distribution'] = 'Kali'
elif 'Parrot' in data:
debian_facts['distribution'] = 'Parrot'
release = re.search('DISTRIB_RELEASE=(.*)', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif 'Devuan' in data:
debian_facts['distribution'] = 'Devuan'
release = re.search(r"PRETTY_NAME=\"?[^(\"]+ \(?([^) \"]+)\)?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1)
elif 'Cumulus' in data:
debian_facts['distribution'] = 'Cumulus Linux'
version = re.search(r"VERSION_ID=(.*)", data)
if version:
major, _minor, _dummy_ver = version.group(1).split(".")
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = major
release = re.search(r'VERSION="(.*)"', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif "Mint" in data:
debian_facts['distribution'] = 'Linux Mint'
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
elif 'UOS' in data or 'Uos' in data or 'uos' in data:
debian_facts['distribution'] = 'Uos'
release = re.search(r"VERSION_CODENAME=\"?([^\"]+)\"?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
elif 'Deepin' in data or 'deepin' in data:
debian_facts['distribution'] = 'Deepin'
release = re.search(r"VERSION_CODENAME=\"?([^\"]+)\"?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
else:
return False, debian_facts
return True, debian_facts
def parse_distribution_file_Mandriva(self, name, data, path, collected_facts):
mandriva_facts = {}
if 'Mandriva' in data:
mandriva_facts['distribution'] = 'Mandriva'
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
mandriva_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
mandriva_facts['distribution_release'] = release.groups()[0]
mandriva_facts['distribution'] = name
else:
return False, mandriva_facts
return True, mandriva_facts
def parse_distribution_file_NA(self, name, data, path, collected_facts):
na_facts = {}
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution and name == 'NA':
na_facts['distribution'] = distribution.group(1).strip('"')
version = re.search("^VERSION=(.*)", line)
if version and collected_facts['distribution_version'] == 'NA':
na_facts['distribution_version'] = version.group(1).strip('"')
return True, na_facts
def parse_distribution_file_Coreos(self, name, data, path, collected_facts):
coreos_facts = {}
# FIXME: pass in ro copy of facts for this kind of thing
distro = get_distribution()
if distro.lower() == 'coreos':
if not data:
# include fix from #15230, #15228
# TODO: verify this is ok for above bugs
return False, coreos_facts
release = re.search("^GROUP=(.*)", data)
if release:
coreos_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, coreos_facts # TODO: remove if tested without this
return True, coreos_facts
def parse_distribution_file_Flatcar(self, name, data, path, collected_facts):
flatcar_facts = {}
distro = get_distribution()
if distro.lower() == 'flatcar':
if not data:
return False, flatcar_facts
release = re.search("^GROUP=(.*)", data)
if release:
flatcar_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, flatcar_facts
return True, flatcar_facts
def parse_distribution_file_ClearLinux(self, name, data, path, collected_facts):
clear_facts = {}
if "clearlinux" not in name.lower():
return False, clear_facts
pname = re.search('NAME="(.*)"', data)
if pname:
if 'Clear Linux' not in pname.groups()[0]:
return False, clear_facts
clear_facts['distribution'] = pname.groups()[0]
version = re.search('VERSION_ID=(.*)', data)
if version:
clear_facts['distribution_major_version'] = version.groups()[0]
clear_facts['distribution_version'] = version.groups()[0]
release = re.search('ID=(.*)', data)
if release:
clear_facts['distribution_release'] = release.groups()[0]
return True, clear_facts
def parse_distribution_file_CentOS(self, name, data, path, collected_facts):
centos_facts = {}
if 'CentOS Stream' in data:
centos_facts['distribution_release'] = 'Stream'
return True, centos_facts
if "TencentOS Server" in data:
centos_facts['distribution'] = 'TencentOS'
return True, centos_facts
return False, centos_facts
class Distribution(object):
"""
This subclass of Facts fills the distribution, distribution_version and distribution_release variables
To do so it checks the existence and content of typical files in /etc containing distribution information
This is unit tested. Please extend the tests to cover all distributions if you have them available.
"""
# keep keys in sync with Conditionals page of docs
OS_FAMILY_MAP = {'RedHat': ['RedHat', 'Fedora', 'CentOS', 'Scientific', 'SLC',
'Ascendos', 'CloudLinux', 'PSBM', 'OracleLinux', 'OVS',
'OEL', 'Amazon', 'Virtuozzo', 'XenServer', 'Alibaba',
'EulerOS', 'openEuler', 'AlmaLinux', 'Rocky', 'TencentOS',
'EuroLinux'],
'Debian': ['Debian', 'Ubuntu', 'Raspbian', 'Neon', 'KDE neon',
'Linux Mint', 'SteamOS', 'Devuan', 'Kali', 'Cumulus Linux',
'Pop!_OS', 'Parrot', 'Pardus GNU/Linux', 'Uos', 'Deepin'],
'Suse': ['SuSE', 'SLES', 'SLED', 'openSUSE', 'openSUSE Tumbleweed',
'SLES_SAP', 'SUSE_LINUX', 'openSUSE Leap'],
'Archlinux': ['Archlinux', 'Antergos', 'Manjaro'],
'Mandrake': ['Mandrake', 'Mandriva'],
'Solaris': ['Solaris', 'Nexenta', 'OmniOS', 'OpenIndiana', 'SmartOS'],
'Slackware': ['Slackware'],
'Altlinux': ['Altlinux'],
'SMGL': ['SMGL'],
'Gentoo': ['Gentoo', 'Funtoo'],
'Alpine': ['Alpine'],
'AIX': ['AIX'],
'HP-UX': ['HPUX'],
'Darwin': ['MacOSX'],
'FreeBSD': ['FreeBSD', 'TrueOS'],
'ClearLinux': ['Clear Linux OS', 'Clear Linux Mix'],
'DragonFly': ['DragonflyBSD', 'DragonFlyBSD', 'Gentoo/DragonflyBSD', 'Gentoo/DragonFlyBSD'],
'NetBSD': ['NetBSD'], }
OS_FAMILY = {}
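# invert OS_FAMILY_MAP into a flat distribution-name -> family lookup table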
for family, names in OS_FAMILY_MAP.items():
for name in names:
OS_FAMILY[name] = family
def __init__(self, module):
self.module = module
def get_distribution_facts(self):
distribution_facts = {}
# The platform module provides information about the running
# system/distribution. Use this as a baseline and fix buggy systems
# afterwards
system = platform.system()
distribution_facts['distribution'] = system
distribution_facts['distribution_release'] = platform.release()
distribution_facts['distribution_version'] = platform.version()
systems_implemented = ('AIX', 'HP-UX', 'Darwin', 'FreeBSD', 'OpenBSD', 'SunOS', 'DragonFly', 'NetBSD')
if system in systems_implemented:
cleanedname = system.replace('-', '')
distfunc = getattr(self, 'get_distribution_' + cleanedname)
dist_func_facts = distfunc()
distribution_facts.update(dist_func_facts)
elif system == 'Linux':
distribution_files = DistributionFiles(module=self.module)
# linux_distribution_facts = LinuxDistribution(module).get_distribution_facts()
dist_file_facts = distribution_files.process_dist_files()
distribution_facts.update(dist_file_facts)
distro = distribution_facts['distribution']
# look for an os_family alias for the 'distribution'; if there isn't one, use 'distribution' itself
distribution_facts['os_family'] = self.OS_FAMILY.get(distro, None) or distro
return distribution_facts
def get_distribution_AIX(self):
aix_facts = {}
rc, out, err = self.module.run_command("/usr/bin/oslevel")
data = out.split('.')
aix_facts['distribution_major_version'] = data[0]
if len(data) > 1:
aix_facts['distribution_version'] = '%s.%s' % (data[0], data[1])
aix_facts['distribution_release'] = data[1]
else:
aix_facts['distribution_version'] = data[0]
return aix_facts
def get_distribution_HPUX(self):
hpux_facts = {}
rc, out, err = self.module.run_command(r"/usr/sbin/swlist |egrep 'HPUX.*OE.*[AB].[0-9]+\.[0-9]+'", use_unsafe_shell=True)
data = re.search(r'HPUX.*OE.*([AB].[0-9]+\.[0-9]+)\.([0-9]+).*', out)
if data:
hpux_facts['distribution_version'] = data.groups()[0]
hpux_facts['distribution_release'] = data.groups()[1]
return hpux_facts
def get_distribution_Darwin(self):
darwin_facts = {}
darwin_facts['distribution'] = 'MacOSX'
rc, out, err = self.module.run_command("/usr/bin/sw_vers -productVersion")
data = out.split()[-1]
if data:
darwin_facts['distribution_major_version'] = data.split('.')[0]
darwin_facts['distribution_version'] = data
return darwin_facts
def get_distribution_FreeBSD(self):
freebsd_facts = {}
freebsd_facts['distribution_release'] = platform.release()
data = re.search(r'(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT|RC|PRERELEASE).*', freebsd_facts['distribution_release'])
if 'trueos' in platform.version():
freebsd_facts['distribution'] = 'TrueOS'
if data:
freebsd_facts['distribution_major_version'] = data.group(1)
freebsd_facts['distribution_version'] = '%s.%s' % (data.group(1), data.group(2))
return freebsd_facts
def get_distribution_OpenBSD(self):
openbsd_facts = {}
openbsd_facts['distribution_version'] = platform.release()
rc, out, err = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'OpenBSD\s[0-9]+.[0-9]+-(\S+)\s.*', out)
if match:
openbsd_facts['distribution_release'] = match.groups()[0]
else:
openbsd_facts['distribution_release'] = 'release'
return openbsd_facts
def get_distribution_DragonFly(self):
dragonfly_facts = {
'distribution_release': platform.release()
}
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.search(r'v(\d+)\.(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT).*', out)
if match:
dragonfly_facts['distribution_major_version'] = match.group(1)
dragonfly_facts['distribution_version'] = '%s.%s.%s' % match.groups()[:3]
return dragonfly_facts
def get_distribution_NetBSD(self):
netbsd_facts = {}
platform_release = platform.release()
netbsd_facts['distribution_release'] = platform_release
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'NetBSD\s(\d+)\.(\d+)\s\((GENERIC)\).*', out)
if match:
netbsd_facts['distribution_major_version'] = match.group(1)
netbsd_facts['distribution_version'] = '%s.%s' % match.groups()[:2]
else:
netbsd_facts['distribution_major_version'] = platform_release.split('.')[0]
netbsd_facts['distribution_version'] = platform_release
return netbsd_facts
def get_distribution_SMGL(self):
smgl_facts = {}
smgl_facts['distribution'] = 'Source Mage GNU/Linux'
return smgl_facts
def get_distribution_SunOS(self):
sunos_facts = {}
data = get_file_content('/etc/release').splitlines()[0]
if 'Solaris' in data:
# for solaris 10 uname_r will contain 5.10, for solaris 11 it will have 5.11
uname_r = get_uname(self.module, flags=['-r'])
ora_prefix = ''
if 'Oracle Solaris' in data:
data = data.replace('Oracle ', '')
ora_prefix = 'Oracle '
sunos_facts['distribution'] = data.split()[0]
sunos_facts['distribution_version'] = data.split()[1]
sunos_facts['distribution_release'] = ora_prefix + data
sunos_facts['distribution_major_version'] = uname_r.split('.')[1].rstrip()
return sunos_facts
uname_v = get_uname(self.module, flags=['-v'])
distribution_version = None
if 'SmartOS' in data:
sunos_facts['distribution'] = 'SmartOS'
if _file_exists('/etc/product'):
product_data = dict([l.split(': ', 1) for l in get_file_content('/etc/product').splitlines() if ': ' in l])
if 'Image' in product_data:
distribution_version = product_data.get('Image').split()[-1]
elif 'OpenIndiana' in data:
sunos_facts['distribution'] = 'OpenIndiana'
elif 'OmniOS' in data:
sunos_facts['distribution'] = 'OmniOS'
distribution_version = data.split()[-1]
elif uname_v is not None and 'NexentaOS_' in uname_v:
sunos_facts['distribution'] = 'Nexenta'
distribution_version = data.split()[-1].lstrip('v')
if sunos_facts.get('distribution', '') in ('SmartOS', 'OpenIndiana', 'OmniOS', 'Nexenta'):
sunos_facts['distribution_release'] = data.strip()
if distribution_version is not None:
sunos_facts['distribution_version'] = distribution_version
elif uname_v is not None:
sunos_facts['distribution_version'] = uname_v.splitlines()[0].strip()
return sunos_facts
return sunos_facts
class DistributionFactCollector(BaseFactCollector):
name = 'distribution'
_fact_ids = set(['distribution_version',
'distribution_release',
'distribution_major_version',
'os_family']) # type: t.Set[str]
def collect(self, module=None, collected_facts=None):
collected_facts = collected_facts or {}
facts_dict = {}
if not module:
return facts_dict
distribution = Distribution(module=module)
distro_facts = distribution.get_distribution_facts()
return distro_facts
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,368 |
Wrong python interpreter discovered in ansible core 2.13.0dev
|
### Summary
ansible [core 2.13.0.dev0] reports the wrong `discovered_interpreter_python`. Here is a comparison from the same host for 2.12.1 and 2.13:
```
[root@perfscale-jenkins-max-control-11-exec-controlplane-1-vsi ansible-tower-setup]# ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible [core 2.13.0.dev0]
config file = /root/ansible-tower-setup/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Mar 5 2021, 01:49:45) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.3
libyaml = True
[root@perfscale-jenkins-max-control-11-exec-controlplane-1-vsi ansible-tower-setup]# ansible -i inventory.cluster automationcontroller -m setup | grep -i discovered_interpreter_python
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
[WARNING]: Platform linux on host xx.xx.xx.xx is using the discovered
Python interpreter at /usr/bin/python3.9, but future installation of another
Python interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-
core/devel/reference_appendices/interpreter_discovery.html for more
information.
"discovered_interpreter_python": "/usr/bin/python3.9",
```
In 2.12.1
```
[root@perfscale-jenkins-max-control-11-exec-controlplane-1-vsi ansible-tower-setup]# ansible --version
ansible [core 2.12.1]
config file = /root/ansible-tower-setup/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
[root@perfscale-jenkins-max-control-11-exec-controlplane-1-vsi ansible-tower-setup]# ansible -i inventory.cluster automationcontroller -m setup | grep -i discovered_interpreter_python
"discovered_interpreter_python": "/usr/libexec/platform-python",
```
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.0.dev0]
config file = /root/ansible-tower-setup/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Mar 5 2021, 01:49:45) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
RHEL 8.4
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
Try to run the setup module and look for `discovered_interpreter_python`
```
### Expected Results
```console
"discovered_interpreter_python": "/usr/libexec/platform-python"
```
### Actual Results
```console
"discovered_interpreter_python": "/usr/bin/python3.9"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77368
|
https://github.com/ansible/ansible/pull/77371
|
f03624e2957783e6b1ff6a005a978d945de59443
|
4723eb9caa12e1b99c4c411199bd4d4ab272534d
| 2022-03-28T13:36:36Z |
python
| 2022-03-28T16:47:09Z |
test/integration/targets/interpreter_discovery_python/tasks/main.yml
|
- name: ensure we can override ansible_python_interpreter
vars:
ansible_python_interpreter: overriddenpython
assert:
that:
- ansible_python_interpreter == 'overriddenpython'
fail_msg: "'ansible_python_interpreter' appears to be set at a high precedence to {{ ansible_python_interpreter }},
which breaks this test."
- name: snag some facts to validate for later
set_fact:
distro: '{{ ansible_distribution | default("unknown") | lower }}'
distro_version: '{{ ansible_distribution_version | default("unknown") }}'
os_family: '{{ ansible_os_family | default("unknown") }}'
- name: test that python discovery is working and that fact persistence makes it only run once
block:
- name: clear facts to force interpreter discovery to run
meta: clear_facts
- name: trigger discovery with auto
vars:
ansible_python_interpreter: auto
ping:
register: auto_out
- name: get the interpreter being used on the target to execute modules
vars:
# keep this set so we can verify we didn't repeat discovery
ansible_python_interpreter: auto
test_echo_module:
register: echoout
- name: clear facts to force interpreter discovery to run again
meta: clear_facts
- name: get the interpreter being used on the target to execute modules with ansible_facts
vars:
# keep this set so we can verify we didn't repeat discovery
ansible_python_interpreter: auto
test_echo_module:
facts:
sandwich: ham
register: echoout_with_facts
- when: distro == 'macosx'
block:
- name: Get the sys.executable for the macos discovered interpreter, as it may be different than the actual path
raw: '{{ auto_out.ansible_facts.discovered_interpreter_python }} -c "import sys; print(sys.executable)"'
register: discovered_sys_executable
- set_fact:
normalized_discovered_interpreter: '{{ discovered_sys_executable.stdout_lines[0] }}'
- set_fact:
normalized_discovered_interpreter: '{{ auto_out.ansible_facts.discovered_interpreter_python }}'
when: distro != 'macosx'
- assert:
that:
- auto_out.ansible_facts.discovered_interpreter_python is defined
- echoout.running_python_interpreter == normalized_discovered_interpreter
# verify that discovery didn't run again (if it did, we'd have the fact in the result)
- echoout.ansible_facts is not defined or echoout.ansible_facts.discovered_interpreter_python is not defined
- echoout_with_facts.ansible_facts is defined
- echoout_with_facts.running_python_interpreter == normalized_discovered_interpreter
- name: test that auto_legacy gives a dep warning when /usr/bin/python present but != auto result
block:
- name: clear facts to force interpreter discovery to run
meta: clear_facts
- name: trigger discovery with auto_legacy
vars:
ansible_python_interpreter: auto_legacy
ping:
register: legacy
- name: check for warning (only on platforms where auto result is not /usr/bin/python and legacy is)
assert:
that:
- legacy.warnings | default([]) | length > 0
# only check for a dep warning if legacy returned /usr/bin/python and auto didn't
when: legacy.ansible_facts.discovered_interpreter_python == '/usr/bin/python' and
auto_out.ansible_facts.discovered_interpreter_python != '/usr/bin/python'
- name: test that auto_silent never warns and got the same answer as auto
block:
- name: clear facts to force interpreter discovery to run
meta: clear_facts
- name: initial task to trigger discovery
vars:
ansible_python_interpreter: auto_silent
ping:
register: auto_silent_out
- assert:
that:
- auto_silent_out.warnings is not defined
- auto_silent_out.ansible_facts.discovered_interpreter_python == auto_out.ansible_facts.discovered_interpreter_python
- name: test that auto_legacy_silent never warns and got the same answer as auto_legacy
block:
- name: clear facts to force interpreter discovery to run
meta: clear_facts
- name: trigger discovery with auto_legacy_silent
vars:
ansible_python_interpreter: auto_legacy_silent
ping:
register: legacy_silent
- assert:
that:
- legacy_silent.warnings is not defined
- legacy_silent.ansible_facts.discovered_interpreter_python == legacy.ansible_facts.discovered_interpreter_python
- name: ensure modules can't set discovered_interpreter_X or ansible_X_interpreter
block:
- test_echo_module:
facts:
ansible_discovered_interpreter_bogus: from module
discovered_interpreter_bogus: from_module
ansible_bogus_interpreter: from_module
test_fact: from_module
register: echoout
- assert:
that:
- test_fact == 'from_module'
- discovered_interpreter_bogus | default('nope') == 'nope'
- ansible_bogus_interpreter | default('nope') == 'nope'
# this one will exist in facts, but with its prefix removed
- ansible_facts['ansible_bogus_interpreter'] | default('nope') == 'nope'
- ansible_facts['discovered_interpreter_bogus'] | default('nope') == 'nope'
- name: debian assertions
assert:
that:
# Debian 8 and older
- auto_out.ansible_facts.discovered_interpreter_python == '/usr/bin/python' and distro_version is version('8', '<=') or distro_version is version('8', '>')
# Debian 10 and newer
- auto_out.ansible_facts.discovered_interpreter_python == '/usr/bin/python3' and distro_version is version('10', '>=') or distro_version is version('10', '<')
when: distro == 'debian'
- name: fedora assertions
assert:
that:
- auto_out.ansible_facts.discovered_interpreter_python == '/usr/bin/python3'
when: distro == 'fedora' and distro_version is version('23', '>=')
- name: rhel assertions
assert:
that:
# rhel 6/7
- (auto_out.ansible_facts.discovered_interpreter_python == '/usr/bin/python' and distro_version is version('8','<')) or distro_version is version('8','>=')
# rhel 8
- (auto_out.ansible_facts.discovered_interpreter_python == '/usr/libexec/platform-python' and distro_version is version('8','==')) or distro_version is version('8','!=')
# rhel 9
- (auto_out.ansible_facts.discovered_interpreter_python == '/usr/bin/python3' and distro_version is version('9','==')) or distro_version is version('9','!=')
when: distro == 'redhat'
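# the rhel 8 case above covers the regression reported in this issue:
# discovery must keep preferring /usr/libexec/platform-python on RHEL 8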
- name: ubuntu assertions
assert:
that:
# ubuntu < 16
- (auto_out.ansible_facts.discovered_interpreter_python == '/usr/bin/python' and distro_version is version('16.04','<')) or distro_version is version('16.04','>=')
# ubuntu >= 16
- (auto_out.ansible_facts.discovered_interpreter_python == '/usr/bin/python3' and distro_version is version('16.04','>=')) or distro_version is version('16.04','<')
when: distro == 'ubuntu'
- name: mac assertions
assert:
that:
- auto_out.ansible_facts.discovered_interpreter_python == '/usr/bin/python'
when: os_family == 'darwin'
always:
- meta: clear_facts
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,349 |
`ansible-galaxy role install` fails if signatures are defined for collections
|
### Summary
If a galaxy requirements file contains collections that have signatures defined, then `ansible-galaxy role install` with that requirements file fails because a keyring is not supplied.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.0.dev0] (devel ff184b0815) last updated 2022/03/24 16:21:31 (GMT -400)
config file = /home/shrews/.ansible.cfg
configured module search path = ['/home/shrews/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/shrews/Devel/github/Shrews/ansible/lib/ansible
ansible collection location = /home/shrews/.ansible/collections:/usr/share/ansible/collections
executable location = /home/shrews/Devel/github/Shrews/ansible/bin/ansible
python version = 3.9.7 (default, Sep 10 2021, 14:59:43) [GCC 11.2.0]
jinja version = 3.1.0
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_HOST_LIST(/home/shrews/.ansible.cfg) = ['/home/shrews/Devel/ansible/hosts.yaml']
```
### OS / Environment
Ubuntu 21.10
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```console
$ cat req.yml
collections:
- name: community.mysql
signatures:
- file:///home/shrews/test.asc
$ ansible-galaxy role install -r req.yml
ERROR! Signatures were provided to verify community.mysql but no keyring was configured.
```
### Expected Results
`role install` should ignore collection signatures
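A self-contained sketch of the expected gating (function name and data shapes are illustrative, not the merged fix):
```python
# signatures only require a keyring when collections are actually installed;
# a plain role install should skip the check entirely
def check_signatures(requirement, keyring, installing_collections):
    if not installing_collections:
        return
    if requirement.get('signatures') and keyring is None:
        raise ValueError('Signatures were provided to verify %s but no keyring '
                         'was configured.' % requirement['name'])

req = {'name': 'community.mysql', 'signatures': ['file:///home/shrews/test.asc']}
check_signatures(req, keyring=None, installing_collections=False)  # no error for roles
```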
### Actual Results
```console
$ ansible-galaxy role install -r req.yml -vvvv
ansible-galaxy [core 2.13.0.dev0] (devel ff184b0815) last updated 2022/03/24 16:21:31 (GMT -400)
config file = /home/shrews/.ansible.cfg
configured module search path = ['/home/shrews/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/shrews/Devel/github/Shrews/ansible/lib/ansible
ansible collection location = /home/shrews/.ansible/collections:/usr/share/ansible/collections
executable location = /home/shrews/Devel/github/Shrews/ansible/bin/ansible-galaxy
python version = 3.9.7 (default, Sep 10 2021, 14:59:43) [GCC 11.2.0]
jinja version = 3.1.0
libyaml = True
Using /home/shrews/.ansible.cfg as config file
Reading requirement file at '/home/shrews/req.yml'
ERROR! Signatures were provided to verify community.mysql but no keyring was configured.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77349
|
https://github.com/ansible/ansible/pull/77355
|
50d4cf931c0e329576599ab0fb1474d36a24c1d6
|
87d52e0ce02bfa5b7db8e98cfcdda4cd494b8791
| 2022-03-24T20:39:56Z |
python
| 2022-03-29T15:03:43Z |
lib/ansible/cli/galaxy.py
|
#!/usr/bin/env python
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import json
import os.path
import re
import shutil
import sys
import textwrap
import time
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import (
build_collection,
download_collections,
find_existing_collections,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
verify_collections,
SIGNATURE_COUNT_RE,
)
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.collection.gpg import GPG_ERROR_MAP
from ansible.galaxy.dependency_resolution.dataclasses import Requirement
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.yaml import yaml_dump, yaml_load
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
SERVER_DEF = [
('url', True),
('username', False),
('password', False),
('token', False),
('auth_url', False),
('v3', False),
('validate_certs', False),
('client_id', False),
]
def with_collection_artifacts_manager(wrapped_method):
"""Inject an artifacts manager if not passed explicitly.
This decorator constructs a ConcreteArtifactsManager and maintains
the related temporary directory auto-cleanup around the target
method invocation.
"""
def method_wrapper(*args, **kwargs):
if 'artifacts_manager' in kwargs:
return wrapped_method(*args, **kwargs)
artifacts_manager_kwargs = {'validate_certs': not context.CLIARGS['ignore_certs']}
keyring = context.CLIARGS.get('keyring', None)
if keyring is not None:
artifacts_manager_kwargs.update({
'keyring': GalaxyCLI._resolve_path(keyring),
'required_signature_count': context.CLIARGS.get('required_valid_signature_count', None),
'ignore_signature_errors': context.CLIARGS.get('ignore_gpg_errors', None),
})
with ConcreteArtifactsManager.under_tmpdir(
C.DEFAULT_LOCAL_TMP,
**artifacts_manager_kwargs
) as concrete_artifact_cm:
kwargs['artifacts_manager'] = concrete_artifact_cm
return wrapped_method(*args, **kwargs)
return method_wrapper
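# Illustrative usage: the decorator is applied to GalaxyCLI.execute_* methods
# further below (e.g. execute_download, execute_install, execute_verify):
#
#     @with_collection_artifacts_manager
#     def execute_install(self, artifacts_manager=None):
#         ...
#
# The wrapped method then receives a ConcreteArtifactsManager whose temporary
# directory is cleaned up automatically when the call returns.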
def _display_header(path, h1, h2, w1=10, w2=7):
display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format(
path,
h1,
h2,
'-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header
'-' * max([len(h2), w2]),
cwidth=w1,
vwidth=w2,
))
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7):
display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format(
fqcn=to_text(collection.fqcn),
version=collection.ver,
cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header
vwidth=max(vwidth, min_vwidth)
))
def _get_collection_widths(collections):
if not is_iterable(collections):
collections = (collections, )
fqcn_set = {to_text(c.fqcn) for c in collections}
version_set = {to_text(c.ver) for c in collections}
fqcn_length = len(max(fqcn_set, key=len))
version_length = len(max(version_set, key=len))
return fqcn_length, version_length
def validate_signature_count(value):
match = re.match(SIGNATURE_COUNT_RE, value)
if match is None:
raise ValueError("{value} is not a valid signature count value")
return value
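# Per the --required-valid-signature-count help text below, accepted values
# look like "2", "all", "+2", or "+all"; anything that does not match
# SIGNATURE_COUNT_RE raises a ValueError.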
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
name = 'ansible-galaxy'
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
self._raw_args = args
self._implicit_role = False
if len(args) > 1:
# Inject role into sys.argv[1] as a backwards compatibility step
if args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
# Remove this in Ansible 2.13 when we also remove -v as an option on the root parser for ansible-galaxy.
idx = 2 if args[1].startswith('-v') else 1
args.insert(idx, 'role')
self._implicit_role = True
# since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization
if args[1:3] == ['role', 'login']:
display.error(
"The login command was removed in late 2020. An API key is now required to publish roles or collections "
"to Galaxy. The key can be found at https://galaxy.ansible.com/me/preferences, and passed to the "
"ansible-galaxy CLI via a file at {0} or (insecurely) via the `--token` "
"command-line argument.".format(to_text(C.GALAXY_TOKEN_PATH)))
sys.exit(1)
self.api_servers = []
self.galaxy = None
self._api = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs',
default=C.GALAXY_IGNORE_CERTS, help='Ignore SSL certificate validation errors.')
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
collections_path = opt_help.argparse.ArgumentParser(add_help=False)
collections_path.add_argument('-p', '--collections-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True),
default=AnsibleCollectionConfig.collection_paths,
action=opt_help.PrependListAction,
help="One or more directories to search for collections in addition "
"to the default COLLECTIONS_PATHS. Separate multiple paths "
"with '{0}'.".format(os.path.pathsep))
cache_options = opt_help.argparse.ArgumentParser(add_help=False)
cache_options.add_argument('--clear-response-cache', dest='clear_response_cache', action='store_true',
default=False, help='Clear the existing server response cache.')
cache_options.add_argument('--no-cache', dest='no_cache', action='store_true', default=False,
help='Do not use the server response cache.')
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_download_options(collection_parser, parents=[common, cache_options])
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force, cache_options])
self.add_list_options(collection_parser, parents=[common, collections_path])
self.add_verify_options(collection_parser, parents=[common, collections_path])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_download_options(self, parser, parents=None):
download_parser = parser.add_parser('download', parents=parents,
help='Download collections and their dependencies as a tarball for an '
'offline install.')
download_parser.set_defaults(func=self.execute_download)
download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*')
download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collection(s) listed as dependencies.")
download_parser.add_argument('-p', '--download-path', dest='download_path',
default='./collections',
help='The directory to download the collections to.')
download_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be downloaded.')
download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_COLLECTION_SKELETON if galaxy_type == 'collection' else C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
galaxy_type = 'role'
if parser.metavar == 'COLLECTION_ACTION':
galaxy_type = 'collection'
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type))
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type)
if galaxy_type == 'collection':
list_parser.add_argument('--format', dest='output_format', choices=('human', 'yaml', 'json'), default='human',
help="Format to display the list of collections in.")
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_verify_options(self, parser, parents=None):
galaxy_type = 'collection'
verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) '
'found on the server and the installed copy. This does not verify dependencies.')
verify_parser.set_defaults(func=self.execute_verify)
verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The installed collection(s) name. '
'This is mutually exclusive with --requirements-file.')
verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during verification and continue with the next specified collection.')
verify_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Validate collection integrity locally without contacting server for '
'canonical manifest hash.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
verify_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
verify_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before using '
'it to verify the rest of the contents of a collection from a Galaxy server. Use in '
'conjunction with a positional collection name (mutually exclusive with --requirements-file).')
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
verify_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
verify_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=self._get_default_collection_path(),
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
install_parser.add_argument('-U', '--upgrade', dest='upgrade', action='store_true', default=False,
help='Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided')
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before '
'installing the collection from a Galaxy server. Use in conjunction with a positional '
'collection name (mutually exclusive with --requirements-file).')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
else:
install_parser.add_argument('-r', '--role-file', dest='requirements',
help='A file containing a list of roles to be installed.')
if self._implicit_role and ('-r' in self._raw_args or '--role-file' in self._raw_args):
# Any collections in the requirements files will also be installed
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during collection signature verification')
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection is built. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required):
return {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
}
validate_certs_fallback = not context.CLIARGS['ignore_certs']
galaxy_options = {}
for optional_key in ['clear_response_cache', 'no_cache']:
if optional_key in context.CLIARGS:
galaxy_options[optional_key] = context.CLIARGS[optional_key]
config_servers = []
# Need to filter out empty strings or non truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_priority, server_key in enumerate(server_list, start=1):
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req)) for k, req in SERVER_DEF)
defs = AnsibleLoader(yaml_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as kwarg to GalaxyApi
auth_url = server_options.pop('auth_url', None)
client_id = server_options.pop('client_id', None)
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
available_api_versions = None
v3 = server_options.pop('v3', None)
validate_certs = server_options['validate_certs']
if validate_certs is None:
validate_certs = validate_certs_fallback
server_options['validate_certs'] = validate_certs
if v3:
# This allows a user to explicitly indicate the server uses the /v3 API
# This was added for testing against pulp_ansible and I'm not sure it has
# a practical purpose outside of this use case. As such, this option is not
# documented as of now
server_options['available_api_versions'] = {'v3': '/v3'}
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username,
server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=validate_certs,
client_id=client_id)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
server_options.update(galaxy_options)
config_servers.append(GalaxyAPI(
self.galaxy, server_key,
priority=server_priority,
**server_options
))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
if cmd_server:
# Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
priority=len(config_servers) + 1,
validate_certs=validate_certs_fallback,
**galaxy_options
))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
priority=0,
validate_certs=validate_certs_fallback,
**galaxy_options
))
return context.CLIARGS['func']()
@property
def api(self):
if self._api:
return self._api
for server in self.api_servers:
try:
if u'v1' in server.available_api_versions:
self._api = server
break
except Exception:
continue
if not self._api:
self._api = self.api_servers[0]
return self._api
def _get_default_collection_path(self):
return C.COLLECTIONS_PATHS[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True, artifacts_manager=None):
"""
Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are 2
requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name, defaults to the name from Galaxy or the repo name if src is a URL.
scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git.
version: The version of the role to download. Can also be a tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
type: git|file|url|galaxy
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:param artifacts_manager: Artifacts manager.
:return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles') or []:
requirements['roles'] += parse_role_req(role_req)
requirements['collections'] = [
Requirement.from_requirement_dict(
self._init_coll_req_dict(collection_req),
artifacts_manager,
)
for collection_req in file_requirements.get('collections') or []
]
return requirements
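# Illustrative v2 requirements file this method parses (see the docstring
# above for the full field descriptions):
#
#     roles:
#       - name: geerlingguy.docker
#     collections:
#       - name: community.mysql
#         version: '>=1.0.0'
#         source: https://galaxy.ansible.com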
def _init_coll_req_dict(self, coll_req):
if not isinstance(coll_req, dict):
# Assume it's a string:
return {'name': coll_req}
if (
'name' not in coll_req or
not coll_req.get('source') or
coll_req.get('type', 'galaxy') != 'galaxy'
):
return coll_req
# Try and match up the requirement source with our list of Galaxy API
# servers defined in the config, otherwise create a server with that
# URL without any auth.
coll_req['source'] = next(
iter(
srvr for srvr in self.api_servers
if coll_req['source'] in {srvr.name, srvr.api_server}
),
GalaxyAPI(
self.galaxy,
'explicit_requirement_{name!s}'.format(
name=coll_req['name'],
),
coll_req['source'],
validate_certs=not context.CLIARGS['ignore_certs'],
),
)
return coll_req
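# Illustrative example: a requirement such as
#     {'name': 'community.mysql', 'source': 'my_server'}
# has its 'source' replaced by the configured GalaxyAPI whose name or URL
# matches 'my_server'; if none matches, a new unauthenticated GalaxyAPI
# pointing at that URL is created instead.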
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
# Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description'].
galaxy_info = role_info.get('galaxy_info', {})
description = role_info.get('description', galaxy_info.get('description', ''))
text.append(u"\tdescription: %s" % description)
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
# make sure we have a trailing newline returned
text.append(u"")
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
loader = DataLoader()
templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config})
templar.environment.filters['comment_ify'] = comment_ify
meta_value = templar.template(meta_template)
return meta_value
def _require_one_of_collections_requirements(
self, collections, requirements_file,
signatures=None,
artifacts_manager=None,
):
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
if signatures is not None:
raise AnsibleError(
"The --signatures option and --requirements-file are mutually exclusive. "
"Use the --signatures with positional collection_name args or provide a "
"'signatures' key for requirements in the --requirements-file."
)
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(
requirements_file,
allow_old_format=False,
artifacts_manager=artifacts_manager,
)
else:
requirements = {
'collections': [
Requirement.from_string(coll_input, artifacts_manager, signatures)
for coll_input in collections
],
'roles': [],
}
return requirements
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(
to_text(collection_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force,
)
@with_collection_artifacts_manager
def execute_download(self, artifacts_manager=None):
collections = context.CLIARGS['args']
no_deps = context.CLIARGS['no_deps']
download_path = context.CLIARGS['download_path']
requirements_file = context.CLIARGS['requirements']
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
artifacts_manager=artifacts_manager,
)['collections']
download_path = GalaxyCLI._resolve_path(download_path)
b_download_path = to_bytes(download_path, errors='surrogate_or_strict')
if not os.path.exists(b_download_path):
os.makedirs(b_download_path)
download_collections(
requirements, download_path, self.api_servers, no_deps,
context.CLIARGS['allow_pre_release'],
artifacts_manager=artifacts_manager,
)
return 0
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
dependencies=[],
))
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
skeleton_ignore_expressions = C.GALAXY_COLLECTION_SKELETON_IGNORE
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
if obj_skeleton is not None:
own_skeleton = False
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
loader = DataLoader()
templar = Templar(loader, variables=inject_data)
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
# Filter out ignored directory names
# Use [:] to mutate the list os.walk uses
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict')
b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict')
with open(dest_file, 'wb') as df:
df.write(b_rendered)
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
Prints out detailed information about an installed role as well as info available from the galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
if not context.CLIARGS['offline']:
remote_data = None
try:
remote_data = self.api.lookup_role_by_name(role, False)
except AnsibleError as e:
if e.http_code == 400 and 'Bad Request' in e.message:
# Role does not exist in Ansible Galaxy
data = u"- the role %s was not found" % role
break
raise AnsibleError("Unable to find info about '%s': %s" % (role, e))
if remote_data:
role_info.update(remote_data)
elif context.CLIARGS['offline'] and not gr._exists:
data = u"- the role %s was not found" % role
break
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data += self._display_role_info(role_info)
self.pager(data)
@with_collection_artifacts_manager
def execute_verify(self, artifacts_manager=None):
collections = context.CLIARGS['args']
search_paths = context.CLIARGS['collections_path']
ignore_errors = context.CLIARGS['ignore_errors']
local_verify_only = context.CLIARGS['offline']
requirements_file = context.CLIARGS['requirements']
signatures = context.CLIARGS['signatures']
if signatures is not None:
signatures = list(signatures)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)['collections']
resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths]
results = verify_collections(
requirements, resolved_paths,
self.api_servers, ignore_errors,
local_verify_only=local_verify_only,
artifacts_manager=artifacts_manager,
)
if any(result for result in results if not result.success):
return 1
return 0
@with_collection_artifacts_manager
def execute_install(self, artifacts_manager=None):
"""
Install one or more roles (``ansible-galaxy role install``), or one or more collections (``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file.
:param artifacts_manager: Artifacts manager.
"""
install_items = context.CLIARGS['args']
requirements_file = context.CLIARGS['requirements']
collection_path = None
signatures = context.CLIARGS.get('signatures')
if signatures is not None:
signatures = list(signatures)
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
two_type_warning = "The requirements file '%s' contains {0}s which will be ignored. To install these {0}s " \
"run 'ansible-galaxy {0} install -r' or to install both at the same time run " \
"'ansible-galaxy install -r' without a custom install path." % to_text(requirements_file)
# TODO: Would be nice to share the same behaviour with args and -r in collections and roles.
collection_requirements = []
role_requirements = []
if context.CLIARGS['type'] == 'collection':
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path'])
requirements = self._require_one_of_collections_requirements(
install_items, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)
collection_requirements = requirements['collections']
if requirements['roles']:
display.vvv(two_type_warning.format('role'))
else:
if not install_items and requirements_file is None:
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
if requirements_file:
if not (requirements_file.endswith('.yaml') or requirements_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
requirements = self._parse_requirements_file(
requirements_file,
artifacts_manager=artifacts_manager,
)
role_requirements = requirements['roles']
# We can only install collections and roles at the same time if the type wasn't specified and the -p
# argument was not used. If collections are present in the requirements then at least display a msg.
galaxy_args = self._raw_args
if requirements['collections'] and (not self._implicit_role or '-p' in galaxy_args or
'--roles-path' in galaxy_args):
# We only want to display a warning if 'ansible-galaxy install -r ... -p ...'. Other cases the user
# was explicit about the type and shouldn't care that collections were skipped.
display_func = display.warning if self._implicit_role else display.vvv
display_func(two_type_warning.format('collection'))
else:
collection_path = self._get_default_collection_path()
collection_requirements = requirements['collections']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
role_requirements.append(GalaxyRole(self.galaxy, self.api, **role))
if not role_requirements and not collection_requirements:
display.display("Skipping install, no requirements found")
return
if role_requirements:
display.display("Starting galaxy role install process")
self._execute_install_role(role_requirements)
if collection_requirements:
display.display("Starting galaxy collection install process")
# Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in
# the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above).
self._execute_install_collection(
collection_requirements, collection_path,
artifacts_manager=artifacts_manager,
)
def _execute_install_collection(
self, requirements, path, artifacts_manager,
):
force = context.CLIARGS['force']
ignore_errors = context.CLIARGS['ignore_errors']
no_deps = context.CLIARGS['no_deps']
force_with_deps = context.CLIARGS['force_with_deps']
disable_gpg_verify = context.CLIARGS['disable_gpg_verify']
# If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS
allow_pre_release = context.CLIARGS.get('allow_pre_release', False)
upgrade = context.CLIARGS.get('upgrade', False)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection won't be picked up in an Ansible "
"run." % (to_text(path), to_text(":".join(collections_path))))
output_path = validate_collection_path(path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(
requirements, output_path, self.api_servers, ignore_errors,
no_deps, force, force_with_deps, upgrade,
allow_pre_release=allow_pre_release,
artifacts_manager=artifacts_manager,
disable_gpg_verify=disable_gpg_verify,
)
return 0
def _execute_install_role(self, requirements):
role_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
for role in requirements:
# only process roles from the roles file whose names match the given args, if any were given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = (role.metadata.get('dependencies') or []) + role.requirements
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in requirements:
display.display('- adding dependency: %s' % to_text(dep_role))
requirements.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
requirements.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
requirements.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List installed collections or roles
"""
if context.CLIARGS['type'] == 'role':
self.execute_list_role()
elif context.CLIARGS['type'] == 'collection':
self.execute_list_collection()
def execute_list_role(self):
"""
List all roles installed on the local system or a specific role
"""
path_found = False
role_found = False
warnings = []
roles_search_paths = context.CLIARGS['roles_path']
role_name = context.CLIARGS['role']
for path in roles_search_paths:
role_path = GalaxyCLI._resolve_path(path)
if os.path.isdir(path):
path_found = True
else:
warnings.append("- the configured path {0} does not exist.".format(path))
continue
if role_name:
# show the requested role, if it exists
gr = GalaxyRole(self.galaxy, self.api, role_name, path=os.path.join(role_path, role_name))
if os.path.isdir(gr.path):
role_found = True
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
break
warnings.append("- the role %s was not found" % role_name)
else:
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
if not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.api, path_file, path=path)
if gr.metadata:
_display_role(gr)
# Do not warn if the role was found in any of the search paths
if role_found and role_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
return 0
@with_collection_artifacts_manager
def execute_list_collection(self, artifacts_manager=None):
"""
List all collections installed on the local system
:param artifacts_manager: Artifacts manager.
"""
output_format = context.CLIARGS['output_format']
collections_search_paths = set(context.CLIARGS['collections_path'])
collection_name = context.CLIARGS['collection']
default_collections_path = AnsibleCollectionConfig.collection_paths
collections_in_paths = {}
warnings = []
path_found = False
collection_found = False
for path in collections_search_paths:
collection_path = GalaxyCLI._resolve_path(path)
if not os.path.exists(path):
if path in default_collections_path:
# don't warn for missing default paths
continue
warnings.append("- the configured path {0} does not exist.".format(collection_path))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
path_found = True
if collection_name:
# list a specific collection
validate_collection_name(collection_name)
namespace, collection = collection_name.split('.')
collection_path = validate_collection_path(collection_path)
b_collection_path = to_bytes(os.path.join(collection_path, namespace, collection), errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
warnings.append("- unable to find {0} in collection paths".format(collection_name))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
collection_found = True
try:
collection = Requirement.from_dir_path_as_unknown(
b_collection_path,
artifacts_manager,
)
except ValueError as val_err:
six.raise_from(AnsibleError(val_err), val_err)
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver}
}
continue
fqcn_width, version_width = _get_collection_widths([collection])
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
_display_collection(collection, fqcn_width, version_width)
else:
# list all collections
collection_path = validate_collection_path(path)
if os.path.isdir(collection_path):
display.vvv("Searching {0} for collections".format(collection_path))
collections = list(find_existing_collections(
collection_path, artifacts_manager,
))
else:
# There was no 'ansible_collections/' directory in the path, so there
# are no collections here.
display.vvv("No 'ansible_collections' directory found at {0}".format(collection_path))
continue
if not collections:
display.vvv("No collections found at {0}".format(collection_path))
continue
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver} for collection in collections
}
continue
# Display header
fqcn_width, version_width = _get_collection_widths(collections)
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
# Sort collections by the namespace and name
for collection in sorted(collections, key=to_text):
_display_collection(collection, fqcn_width, version_width)
# Do not warn if the specific collection was found in any of the search paths
if collection_found and collection_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
if output_format == 'json':
display.display(json.dumps(collections_in_paths))
elif output_format == 'yaml':
display.display(yaml_dump(collections_in_paths))
return 0
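# Illustrative shape of the machine-readable output assembled above; the
# path and version values in this sketch are hypothetical:
#   {"/root/.ansible/collections/ansible_collections":
#       {"community.mysql": {"version": "3.1.0"}}}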
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' searches for roles on the Ansible Galaxy server'''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
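# The doubled percent signs survive the first substitution, so with
# name_len == 30 this becomes u" %-30s %s": a left-aligned, padded name
# column followed by the description.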
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return True
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return True
def main(args=None):
GalaxyCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,349 |
`ansible-galaxy role install` fails if signatures are defined for collections
|
### Summary
If a galaxy requirements file contains collections that have signatures defined, then `ansible-galaxy role install` with that requirements file fails because a keyring is not supplied.
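The check that raises this error runs while the requirements file is being parsed, before any role/collection handling splits off (see `Requirement.from_requirement_dict` in the file content below). A minimal sketch of a deferred check, assuming parsing stops validating the keyring itself; `validate_signature_sources` is a hypothetical helper, not an existing Ansible API:
```python
from ansible.errors import AnsibleError

# Hypothetical helper: invoked only on collection-install code paths, so
# 'ansible-galaxy role install' could parse a mixed requirements file
# without tripping over collection-only signature metadata.
def validate_signature_sources(req_name, signature_sources, keyring):
    if signature_sources and keyring is None:
        raise AnsibleError(
            f"Signatures were provided to verify {req_name} "
            "but no keyring was configured."
        )
```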
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.0.dev0] (devel ff184b0815) last updated 2022/03/24 16:21:31 (GMT -400)
config file = /home/shrews/.ansible.cfg
configured module search path = ['/home/shrews/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/shrews/Devel/github/Shrews/ansible/lib/ansible
ansible collection location = /home/shrews/.ansible/collections:/usr/share/ansible/collections
executable location = /home/shrews/Devel/github/Shrews/ansible/bin/ansible
python version = 3.9.7 (default, Sep 10 2021, 14:59:43) [GCC 11.2.0]
jinja version = 3.1.0
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_HOST_LIST(/home/shrews/.ansible.cfg) = ['/home/shrews/Devel/ansible/hosts.yaml']
```
### OS / Environment
Ubuntu 21.10
### Steps to Reproduce
```yaml
$ cat req.yml
collections:
- name: community.mysql
signatures:
- file:///home/shrews/test.asc
$ ansible-galaxy role install -r req.yml
ERROR! Signatures were provided to verify community.mysql but no keyring was configured.
```
### Expected Results
`role install` should ignore collection signatures
### Actual Results
```console
$ ansible-galaxy role install -r req.yml -vvvv
ansible-galaxy [core 2.13.0.dev0] (devel ff184b0815) last updated 2022/03/24 16:21:31 (GMT -400)
config file = /home/shrews/.ansible.cfg
configured module search path = ['/home/shrews/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/shrews/Devel/github/Shrews/ansible/lib/ansible
ansible collection location = /home/shrews/.ansible/collections:/usr/share/ansible/collections
executable location = /home/shrews/Devel/github/Shrews/ansible/bin/ansible-galaxy
python version = 3.9.7 (default, Sep 10 2021, 14:59:43) [GCC 11.2.0]
jinja version = 3.1.0
libyaml = True
Using /home/shrews/.ansible.cfg as config file
Reading requirement file at '/home/shrews/req.yml'
ERROR! Signatures were provided to verify community.mysql but no keyring was configured.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77349
|
https://github.com/ansible/ansible/pull/77355
|
50d4cf931c0e329576599ab0fb1474d36a24c1d6
|
87d52e0ce02bfa5b7db8e98cfcdda4cd494b8791
| 2022-03-24T20:39:56Z |
python
| 2022-03-29T15:03:43Z |
lib/ansible/galaxy/dependency_resolution/dataclasses.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Dependency structs."""
# FIXME: add caching all over the place
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import typing as t
from collections import namedtuple
from collections.abc import MutableSequence, MutableMapping
from glob import iglob
from urllib.parse import urlparse
from yaml import safe_load
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
Collection = t.TypeVar(
'Collection',
'Candidate', 'Requirement',
'_ComputedReqKindsMixin',
)
from ansible.errors import AnsibleError
from ansible.galaxy.api import GalaxyAPI
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
_ALLOW_CONCRETE_POINTER_IN_SOURCE = False # NOTE: This is a feature flag
_GALAXY_YAML = b'galaxy.yml'
_MANIFEST_JSON = b'MANIFEST.json'
_SOURCE_METADATA_FILE = b'GALAXY.yml'
display = Display()
def get_validated_source_info(b_source_info_path, namespace, name, version):
source_info_path = to_text(b_source_info_path, errors='surrogate_or_strict')
if not os.path.isfile(b_source_info_path):
return None
try:
with open(b_source_info_path, mode='rb') as fd:
metadata = safe_load(fd)
except OSError as e:
display.warning(
f"Error getting collection source information at '{source_info_path}': {to_text(e, errors='surrogate_or_strict')}"
)
return None
if not isinstance(metadata, MutableMapping):
display.warning(f"Error getting collection source information at '{source_info_path}': expected a YAML dictionary")
return None
schema_errors = _validate_v1_source_info_schema(namespace, name, version, metadata)
if schema_errors:
display.warning(f"Ignoring source metadata file at {source_info_path} due to the following errors:")
display.warning("\n".join(schema_errors))
display.warning("Correct the source metadata file by reinstalling the collection.")
return None
return metadata
def _validate_v1_source_info_schema(namespace, name, version, provided_arguments):
argument_spec_data = dict(
format_version=dict(choices=["1.0.0"]),
download_url=dict(),
version_url=dict(),
server=dict(),
signatures=dict(
type=list,
suboptions=dict(
signature=dict(),
pubkey_fingerprint=dict(),
signing_service=dict(),
pulp_created=dict(),
)
),
name=dict(choices=[name]),
namespace=dict(choices=[namespace]),
version=dict(choices=[version]),
)
if not isinstance(provided_arguments, dict):
raise AnsibleError(
f'Invalid offline source info for {namespace}.{name}:{version}, expected a dict and got {type(provided_arguments)}'
)
validator = ArgumentSpecValidator(argument_spec_data)
validation_result = validator.validate(provided_arguments)
return validation_result.error_messages
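# Illustrative GALAXY.yml payload accepted by the schema above; the field
# names come from the argument spec, while the concrete values here are
# hypothetical:
#
#   format_version: "1.0.0"
#   namespace: community
#   name: mysql
#   version: "3.1.0"
#   server: https://galaxy.ansible.com
#   download_url: https://galaxy.ansible.com/download/community-mysql-3.1.0.tar.gz
#   signatures:
#     - signature: "-----BEGIN PGP SIGNATURE----- ..."
#       pubkey_fingerprint: "ABCDEF0123456789"
#       signing_service: ansible-sign
#       pulp_created: "2022-03-24T20:39:56Z"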
def _is_collection_src_dir(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
return os.path.isfile(os.path.join(b_dir_path, _GALAXY_YAML))
def _is_installed_collection_dir(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
return os.path.isfile(os.path.join(b_dir_path, _MANIFEST_JSON))
def _is_collection_dir(dir_path):
return (
_is_installed_collection_dir(dir_path) or
_is_collection_src_dir(dir_path)
)
def _find_collections_in_subdirs(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
subdir_glob_pattern = os.path.join(
b_dir_path,
# b'*', # namespace is supposed to be top-level per spec
b'*', # collection name
)
for subdir in iglob(subdir_glob_pattern):
if os.path.isfile(os.path.join(subdir, _MANIFEST_JSON)):
yield subdir
elif os.path.isfile(os.path.join(subdir, _GALAXY_YAML)):
yield subdir
def _is_collection_namespace_dir(tested_str):
return any(_find_collections_in_subdirs(tested_str))
def _is_file_path(tested_str):
return os.path.isfile(to_bytes(tested_str, errors='surrogate_or_strict'))
def _is_http_url(tested_str):
return urlparse(tested_str).scheme.lower() in {'http', 'https'}
def _is_git_url(tested_str):
return tested_str.startswith(('git+', 'git@'))
def _is_concrete_artifact_pointer(tested_str):
return any(
predicate(tested_str)
for predicate in (
# NOTE: Maintain the checks to be sorted from light to heavy:
_is_git_url,
_is_http_url,
_is_file_path,
_is_collection_dir,
_is_collection_namespace_dir,
)
)
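# Classification examples for the predicates above (all values hypothetical):
#   'git+https://github.com/org/repo.git'       -> _is_git_url
#   'https://example.com/ns-coll-1.0.0.tar.gz'  -> _is_http_url
#   './ns-coll-1.0.0.tar.gz' (an existing file) -> _is_file_path
#   './ansible_collections/ns/coll' (containing a MANIFEST.json or a
#   galaxy.yml)                                 -> _is_collection_dir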
class _ComputedReqKindsMixin:
def __init__(self, *args, **kwargs):
if not self.may_have_offline_galaxy_info:
self._source_info = None
else:
info_path = self.construct_galaxy_info_path(to_bytes(self.src, errors='surrogate_or_strict'))
self._source_info = get_validated_source_info(
info_path,
self.namespace,
self.name,
self.ver
)
@classmethod
def from_dir_path_as_unknown( # type: ignore[misc]
cls, # type: t.Type[Collection]
dir_path, # type: bytes
art_mgr, # type: ConcreteArtifactsManager
): # type: (...) -> Collection
"""Make collection from an unspecified dir type.
This alternative constructor attempts to grab metadata from the
given path if it's a directory. If there's no metadata, it
falls back to guessing the FQCN based on the directory path and
sets the version to "*".
It raises a ValueError immediately if the input is not an
existing directory path.
"""
if not os.path.isdir(dir_path):
raise ValueError(
"The collection directory '{path!s}' doesn't exist".
format(path=to_native(dir_path)),
)
try:
return cls.from_dir_path(dir_path, art_mgr)
except ValueError:
return cls.from_dir_path_implicit(dir_path)
@classmethod
def from_dir_path(cls, dir_path, art_mgr):
"""Make collection from an directory with metadata."""
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
if not _is_collection_dir(b_dir_path):
display.warning(
u"Collection at '{path!s}' does not have a {manifest_json!s} "
u'file, nor has it {galaxy_yml!s}: cannot detect version.'.
format(
galaxy_yml=to_text(_GALAXY_YAML),
manifest_json=to_text(_MANIFEST_JSON),
path=to_text(dir_path, errors='surrogate_or_strict'),
),
)
raise ValueError(
'`dir_path` argument must be an installed or a source'
' collection directory.',
)
tmp_inst_req = cls(None, None, dir_path, 'dir', None)
req_name = art_mgr.get_direct_collection_fqcn(tmp_inst_req)
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
return cls(req_name, req_version, dir_path, 'dir', None)
@classmethod
def from_dir_path_implicit( # type: ignore[misc]
cls, # type: t.Type[Collection]
dir_path, # type: bytes
): # type: (...) -> Collection
"""Construct a collection instance based on an arbitrary dir.
This alternative constructor infers the FQCN based on the parent
and current directory names. It also sets the version to "*"
regardless of whether any of known metadata files are present.
"""
# There is no metadata, but it isn't required for a functional collection. Determine the namespace.name from the path.
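# e.g. b'/root/ansible_collections/ns/coll' -> FQCN 'ns.coll', version '*'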
u_dir_path = to_text(dir_path, errors='surrogate_or_strict')
path_list = u_dir_path.split(os.path.sep)
req_name = '.'.join(path_list[-2:])
return cls(req_name, '*', dir_path, 'dir', None) # type: ignore[call-arg]
@classmethod
def from_string(cls, collection_input, artifacts_manager, supplemental_signatures):
req = {}
if _is_concrete_artifact_pointer(collection_input):
# Arg is a file path or URL to a collection
req['name'] = collection_input
else:
req['name'], _sep, req['version'] = collection_input.partition(':')
if not req['version']:
del req['version']
req['signatures'] = supplemental_signatures
return cls.from_requirement_dict(req, artifacts_manager)
@classmethod
def from_requirement_dict(cls, collection_req, art_mgr):
req_name = collection_req.get('name', None)
req_version = collection_req.get('version', '*')
req_type = collection_req.get('type')
# TODO: decide how to deprecate the old src API behavior
req_source = collection_req.get('source', None)
req_signature_sources = collection_req.get('signatures', None)
if req_signature_sources is not None:
if art_mgr.keyring is None:
raise AnsibleError(
f"Signatures were provided to verify {req_name} but no keyring was configured."
)
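# NOTE: this check runs during generic requirements parsing, so any
# subcommand that reads the file (including 'ansible-galaxy role install')
# hits it when collection entries carry signatures but no keyring is
# configured; this is the failure mode reported in issue #77349 above.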
if not isinstance(req_signature_sources, MutableSequence):
req_signature_sources = [req_signature_sources]
req_signature_sources = frozenset(req_signature_sources)
if req_type is None:
if ( # FIXME: decide on the future behavior:
_ALLOW_CONCRETE_POINTER_IN_SOURCE
and req_source is not None
and _is_concrete_artifact_pointer(req_source)
):
src_path = req_source
elif (
req_name is not None
and AnsibleCollectionRef.is_valid_collection_name(req_name)
):
req_type = 'galaxy'
elif (
req_name is not None
and _is_concrete_artifact_pointer(req_name)
):
src_path, req_name = req_name, None
else:
dir_tip_tmpl = ( # NOTE: leading LFs are for concat
'\n\nTip: Make sure you are pointing to the right '
'subdirectory — `{src!s}` looks like a directory '
'but it is neither a collection, nor a namespace '
'dir.'
)
if req_source is not None and os.path.isdir(req_source):
tip = dir_tip_tmpl.format(src=req_source)
elif req_name is not None and os.path.isdir(req_name):
tip = dir_tip_tmpl.format(src=req_name)
elif req_name:
tip = '\n\nCould not find {0}.'.format(req_name)
else:
tip = ''
raise AnsibleError( # NOTE: I'd prefer a ValueError instead
'Neither the collection requirement entry key '
"'name', nor 'source' point to a concrete "
"resolvable collection artifact. Also 'name' is "
'not an FQCN. A valid collection name must be in '
'the format <namespace>.<collection>. Please make '
'sure that the namespace and the collection name '
'contain characters from [a-zA-Z0-9_] only.'
'{extra_tip!s}'.format(extra_tip=tip),
)
if req_type is None:
if _is_git_url(src_path):
req_type = 'git'
req_source = src_path
elif _is_http_url(src_path):
req_type = 'url'
req_source = src_path
elif _is_file_path(src_path):
req_type = 'file'
req_source = src_path
elif _is_collection_dir(src_path):
if _is_installed_collection_dir(src_path) and _is_collection_src_dir(src_path):
# Note that ``download`` requires a dir with a ``galaxy.yml`` and fails if it
# doesn't exist, but if a ``MANIFEST.json`` also exists, it would be used
# instead of the ``galaxy.yml``.
raise AnsibleError(
u"Collection requirement at '{path!s}' has both a {manifest_json!s} "
u"file and a {galaxy_yml!s}.\nThe requirement must either be an installed "
u"collection directory or a source collection directory, not both.".
format(
path=to_text(src_path, errors='surrogate_or_strict'),
manifest_json=to_text(_MANIFEST_JSON),
galaxy_yml=to_text(_GALAXY_YAML),
)
)
req_type = 'dir'
req_source = src_path
elif _is_collection_namespace_dir(src_path):
req_name = None # No name for a virtual req or "namespace."?
req_type = 'subdirs'
req_source = src_path
else:
raise AnsibleError( # NOTE: this is never supposed to be hit
'Failed to automatically detect the collection '
'requirement type.',
)
if req_type not in {'file', 'galaxy', 'git', 'url', 'dir', 'subdirs'}:
raise AnsibleError(
"The collection requirement entry key 'type' must be "
'one of file, galaxy, git, dir, subdirs, or url.'
)
if req_name is None and req_type == 'galaxy':
raise AnsibleError(
'Collections requirement entry should contain '
"the key 'name' if it's requested from a Galaxy-like "
'index server.',
)
if req_type != 'galaxy' and req_source is None:
req_source, req_name = req_name, None
if (
req_type == 'galaxy' and
isinstance(req_source, GalaxyAPI) and
not _is_http_url(req_source.api_server)
):
raise AnsibleError(
"Collections requirement 'source' entry should contain "
'a valid Galaxy API URL but it does not: {not_url!s} '
'is not an HTTP URL.'.
format(not_url=req_source.api_server),
)
tmp_inst_req = cls(req_name, req_version, req_source, req_type, req_signature_sources)
if req_type not in {'galaxy', 'subdirs'} and req_name is None:
req_name = art_mgr.get_direct_collection_fqcn(tmp_inst_req) # TODO: fix the cache key in artifacts manager?
if req_type not in {'galaxy', 'subdirs'} and req_version == '*':
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
return cls(
req_name, req_version,
req_source, req_type,
req_signature_sources,
)
def __repr__(self):
return (
'<{self!s} of type {coll_type!r} from {src!s}>'.
format(self=self, coll_type=self.type, src=self.src or 'Galaxy')
)
def __str__(self):
return to_native(self.__unicode__())
def __unicode__(self):
if self.fqcn is None:
return (
u'"virtual collection Git repo"' if self.is_scm
else u'"virtual collection namespace"'
)
return (
u'{fqcn!s}:{ver!s}'.
format(fqcn=to_text(self.fqcn), ver=to_text(self.ver))
)
@property
def may_have_offline_galaxy_info(self):
if self.fqcn is None:
# Virtual collection
return False
elif not self.is_dir or self.src is None or not _is_collection_dir(self.src):
# Not a dir or isn't on-disk
return False
return True
def construct_galaxy_info_path(self, b_collection_path):
if not self.may_have_offline_galaxy_info and not self.type == 'galaxy':
raise TypeError('Only installed collections from a Galaxy server have offline Galaxy info')
# Store Galaxy metadata adjacent to the namespace of the collection
# Chop off the last two parts of the path (/ns/coll) to get the dir containing the ns
b_src = to_bytes(b_collection_path, errors='surrogate_or_strict')
b_path_parts = b_src.split(to_bytes(os.path.sep))[0:-2]
b_metadata_dir = to_bytes(os.path.sep).join(b_path_parts)
# ns.coll-1.0.0.info
b_dir_name = to_bytes(f"{self.namespace}.{self.name}-{self.ver}.info", errors="surrogate_or_strict")
# collections/ansible_collections/ns.coll-1.0.0.info/GALAXY.yml
return os.path.join(b_metadata_dir, b_dir_name, _SOURCE_METADATA_FILE)
def _get_separate_ns_n_name(self): # FIXME: use LRU cache
return self.fqcn.split('.')
@property
def namespace(self):
if self.is_virtual:
raise TypeError('Virtual collections do not have a namespace')
return self._get_separate_ns_n_name()[0]
@property
def name(self):
if self.is_virtual:
raise TypeError('Virtual collections do not have a name')
return self._get_separate_ns_n_name()[-1]
@property
def canonical_package_id(self):
if not self.is_virtual:
return to_native(self.fqcn)
return (
'<virtual namespace from {src!s} of type {src_type!s}>'.
format(src=to_native(self.src), src_type=to_native(self.type))
)
@property
def is_virtual(self):
return self.is_scm or self.is_subdirs
@property
def is_file(self):
return self.type == 'file'
@property
def is_dir(self):
return self.type == 'dir'
@property
def namespace_collection_paths(self):
return [
to_native(path)
for path in _find_collections_in_subdirs(self.src)
]
@property
def is_subdirs(self):
return self.type == 'subdirs'
@property
def is_url(self):
return self.type == 'url'
@property
def is_scm(self):
return self.type == 'git'
@property
def is_concrete_artifact(self):
return self.type in {'git', 'url', 'file', 'dir', 'subdirs'}
@property
def is_online_index_pointer(self):
return not self.is_concrete_artifact
@property
def source_info(self):
return self._source_info
RequirementNamedTuple = namedtuple('Requirement', ('fqcn', 'ver', 'src', 'type', 'signature_sources')) # type: ignore[name-match]
CandidateNamedTuple = namedtuple('Candidate', ('fqcn', 'ver', 'src', 'type', 'signatures')) # type: ignore[name-match]
class Requirement(
_ComputedReqKindsMixin,
RequirementNamedTuple,
):
"""An abstract requirement request."""
def __new__(cls, *args, **kwargs):
self = RequirementNamedTuple.__new__(cls, *args, **kwargs)
return self
def __init__(self, *args, **kwargs):
super(Requirement, self).__init__()
class Candidate(
_ComputedReqKindsMixin,
CandidateNamedTuple,
):
"""A concrete collection candidate with its version resolved."""
def __new__(cls, *args, **kwargs):
self = CandidateNamedTuple.__new__(cls, *args, **kwargs)
return self
def __init__(self, *args, **kwargs):
super(Candidate, self).__init__()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,349 |
`ansible-galaxy role install` fails if signatures are defined for collections
|
### Summary
If a galaxy requirements file contains collections that have signatures defined, then `ansible-galaxy role install` with that requirements file fails because a keyring is not supplied.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.0.dev0] (devel ff184b0815) last updated 2022/03/24 16:21:31 (GMT -400)
config file = /home/shrews/.ansible.cfg
configured module search path = ['/home/shrews/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/shrews/Devel/github/Shrews/ansible/lib/ansible
ansible collection location = /home/shrews/.ansible/collections:/usr/share/ansible/collections
executable location = /home/shrews/Devel/github/Shrews/ansible/bin/ansible
python version = 3.9.7 (default, Sep 10 2021, 14:59:43) [GCC 11.2.0]
jinja version = 3.1.0
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_HOST_LIST(/home/shrews/.ansible.cfg) = ['/home/shrews/Devel/ansible/hosts.yaml']
```
### OS / Environment
Ubuntu 21.10
### Steps to Reproduce
```yaml
$ cat req.yml
collections:
- name: community.mysql
signatures:
- file:///home/shrews/test.asc
$ ansible-galaxy role install -r req.yml
ERROR! Signatures were provided to verify community.mysql but no keyring was configured.
```
### Expected Results
`role install` should ignore collection signatures
### Actual Results
```console
$ ansible-galaxy role install -r req.yml -vvvv
ansible-galaxy [core 2.13.0.dev0] (devel ff184b0815) last updated 2022/03/24 16:21:31 (GMT -400)
config file = /home/shrews/.ansible.cfg
configured module search path = ['/home/shrews/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/shrews/Devel/github/Shrews/ansible/lib/ansible
ansible collection location = /home/shrews/.ansible/collections:/usr/share/ansible/collections
executable location = /home/shrews/Devel/github/Shrews/ansible/bin/ansible-galaxy
python version = 3.9.7 (default, Sep 10 2021, 14:59:43) [GCC 11.2.0]
jinja version = 3.1.0
libyaml = True
Using /home/shrews/.ansible.cfg as config file
Reading requirement file at '/home/shrews/req.yml'
ERROR! Signatures were provided to verify community.mysql but no keyring was configured.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77349
|
https://github.com/ansible/ansible/pull/77355
|
50d4cf931c0e329576599ab0fb1474d36a24c1d6
|
87d52e0ce02bfa5b7db8e98cfcdda4cd494b8791
| 2022-03-24T20:39:56Z |
python
| 2022-03-29T15:03:43Z |
test/integration/targets/ansible-galaxy-collection/tasks/install.yml
|
---
- name: create test collection install directory - {{ test_name }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: directory
- name: install simple collection from first accessible server
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: from_first_good_server
- name: get installed files of install simple collection from first good server
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection from first good server
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection from first good server
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in from_first_good_server.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: install simple collection with implicit path - {{ test_name }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_name }}' {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_normal
- name: get installed files of install simple collection with implicit path - {{ test_name }}
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection with implicit path - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection with implicit path - {{ test_name }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_normal.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- name: install existing without --force - {{ test_name }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_name }}' {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_existing_no_force
- name: assert install existing without --force - {{ test_name }}
assert:
that:
- '"Nothing to do. All requested collections are already installed" in install_existing_no_force.stdout'
- name: install existing with --force - {{ test_name }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_name }}' --force {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_existing_force
- name: assert install existing with --force - {{ test_name }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_existing_force.stdout'
- name: remove test installed collection - {{ test_name }}
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: install pre-release as explicit version to custom dir - {{ test_name }}
command: ansible-galaxy collection install 'namespace1.name1:1.1.0-beta.1' -s '{{ test_name }}' -p '{{ galaxy_dir }}/ansible_collections' {{ galaxy_verbosity }}
register: install_prerelease
- name: get result of install pre-release as explicit version to custom dir - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_prerelease_actual
- name: assert install pre-release as explicit version to custom dir - {{ test_name }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_prerelease.stdout'
- (install_prerelease_actual.content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- name: Remove beta
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
state: absent
- name: install pre-release version with --pre to custom dir - {{ test_name }}
command: ansible-galaxy collection install --pre 'namespace1.name1' -s '{{ test_name }}' -p '{{ galaxy_dir }}/ansible_collections' {{ galaxy_verbosity }}
register: install_prerelease
- name: get result of install pre-release version with --pre to custom dir - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_prerelease_actual
- name: assert install pre-release version with --pre to custom dir - {{ test_name }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_prerelease.stdout'
- (install_prerelease_actual.content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- name: install multiple collections with dependencies - {{ test_name }}
command: ansible-galaxy collection install parent_dep.parent_collection:1.0.0 namespace2.name -s {{ test_name }} {{ galaxy_verbosity }}
args:
chdir: '{{ galaxy_dir }}/ansible_collections'
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_CONFIG: '{{ galaxy_dir }}/ansible.cfg'
register: install_multiple_with_dep
- name: get result of install multiple collections with dependencies - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection.namespace }}/{{ collection.name }}/MANIFEST.json'
register: install_multiple_with_dep_actual
loop_control:
loop_var: collection
loop:
- namespace: namespace2
name: name
- namespace: parent_dep
name: parent_collection
- namespace: child_dep
name: child_collection
- namespace: child_dep
name: child_dep2
- name: assert install multiple collections with dependencies - {{ test_name }}
assert:
that:
- (install_multiple_with_dep_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_multiple_with_dep_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_multiple_with_dep_actual.results[2].content | b64decode | from_json).collection_info.version == '0.9.9'
- (install_multiple_with_dep_actual.results[3].content | b64decode | from_json).collection_info.version == '1.2.2'
- name: expect failure with dep resolution failure
command: ansible-galaxy collection install fail_namespace.fail_collection -s {{ test_name }} {{ galaxy_verbosity }}
register: fail_dep_mismatch
failed_when:
- '"Could not satisfy the following requirements" not in fail_dep_mismatch.stderr'
- '" fail_dep2.name:<0.0.5 (dependency of fail_namespace.fail_collection:2.1.2)" not in fail_dep_mismatch.stderr'
- name: Find artifact url for namespace3.name
uri:
url: '{{ test_server }}{{ vX }}collections/namespace3/name/versions/1.0.0/'
user: '{{ pulp_user }}'
password: '{{ pulp_password }}'
force_basic_auth: true
register: artifact_url_response
- name: download a collection for an offline install - {{ test_name }}
get_url:
url: '{{ artifact_url_response.json.download_url }}'
dest: '{{ galaxy_dir }}/namespace3.tar.gz'
- name: install a collection from a tarball - {{ test_name }}
command: ansible-galaxy collection install '{{ galaxy_dir }}/namespace3.tar.gz' {{ galaxy_verbosity }}
register: install_tarball
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collection from a tarball - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace3/name/MANIFEST.json'
register: install_tarball_actual
- name: assert install a collection from a tarball - {{ test_name }}
assert:
that:
- '"Installing ''namespace3.name:1.0.0'' to" in install_tarball.stdout'
- (install_tarball_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: write a requirements file using the artifact and a conflicting version
copy:
content: |
collections:
- name: {{ galaxy_dir }}/namespace3.tar.gz
version: 1.2.0
dest: '{{ galaxy_dir }}/test_req.yml'
- name: install the requirements file with mismatched versions
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/test_req.yml' {{ galaxy_verbosity }}
ignore_errors: True
register: result
- name: remove the requirements file
file:
path: '{{ galaxy_dir }}/test_req.yml'
state: absent
- assert:
that: error == expected_error
vars:
reset_color: '\x1b\[0m'
color: '\x1b\[[0-9];[0-9]{2}m'
error: "{{ result.stderr | regex_replace(reset_color) | regex_replace(color) | regex_replace('\\n', ' ') }}"
expected_error: >-
ERROR! Failed to resolve the requested dependencies map.
Got the candidate namespace3.name:1.0.0 (direct request)
which didn't satisfy all of the following requirements:
* namespace3.name:1.2.0
- name: test error for mismatched dependency versions
vars:
reset_color: '\x1b\[0m'
color: '\x1b\[[0-9];[0-9]{2}m'
error: "{{ result.stderr | regex_replace(reset_color) | regex_replace(color) | regex_replace('\\n', ' ') }}"
expected_error: >-
ERROR! Failed to resolve the requested dependencies map.
Got the candidate namespace3.name:1.0.0 (dependency of tmp_parent.name:1.0.0)
which didn't satisfy all of the following requirements:
* namespace3.name:1.2.0
block:
- name: init a new parent collection
command: ansible-galaxy collection init tmp_parent.name --init-path '{{ galaxy_dir }}/scratch'
- name: replace the dependencies
lineinfile:
path: "{{ galaxy_dir }}/scratch/tmp_parent/name/galaxy.yml"
regexp: "^dependencies:*"
line: "dependencies: { '{{ galaxy_dir }}/namespace3.tar.gz': '1.2.0' }"
- name: build the new artifact
command: ansible-galaxy collection build {{ galaxy_dir }}/scratch/tmp_parent/name
args:
chdir: "{{ galaxy_dir }}"
- name: install the artifact to verify the error is handled
command: ansible-galaxy collection install '{{ galaxy_dir }}/tmp_parent-name-1.0.0.tar.gz'
ignore_errors: yes
register: result
- debug: msg="Actual - {{ error }}"
- debug: msg="Expected - {{ expected_error }}"
- assert:
that: error == expected_error
always:
- name: clean up collection skeleton and artifact
file:
state: absent
path: "{{ item }}"
loop:
- "{{ galaxy_dir }}/scratch/tmp_parent/"
- "{{ galaxy_dir }}/tmp_parent-name-1.0.0.tar.gz"
- name: setup bad tarball - {{ test_name }}
script: build_bad_tar.py {{ galaxy_dir | quote }}
- name: fail to install a collection from a bad tarball - {{ test_name }}
command: ansible-galaxy collection install '{{ galaxy_dir }}/suspicious-test-1.0.0.tar.gz' {{ galaxy_verbosity }}
register: fail_bad_tar
failed_when: fail_bad_tar.rc != 1 and "Cannot extract tar entry '../../outside.sh' as it will be placed outside the collection directory" not in fail_bad_tar.stderr
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of failed collection install - {{ test_name }}
stat:
path: '{{ galaxy_dir }}/ansible_collections/suspicious'
register: fail_bad_tar_actual
- name: assert result of failed collection install - {{ test_name }}
assert:
that:
- not fail_bad_tar_actual.stat.exists
- name: Find artifact url for namespace4.name
uri:
url: '{{ test_server }}{{ vX }}collections/namespace4/name/versions/1.0.0/'
user: '{{ pulp_user }}'
password: '{{ pulp_password }}'
force_basic_auth: true
register: artifact_url_response
- name: install a collection from a URI - {{ test_name }}
command: ansible-galaxy collection install {{ artifact_url_response.json.download_url }} {{ galaxy_verbosity }}
register: install_uri
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collection from a URI - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace4/name/MANIFEST.json'
register: install_uri_actual
- name: assert install a collection from a URI - {{ test_name }}
assert:
that:
- '"Installing ''namespace4.name:1.0.0'' to" in install_uri.stdout'
- (install_uri_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: fail to install a collection with an undefined URL - {{ test_name }}
command: ansible-galaxy collection install namespace5.name {{ galaxy_verbosity }}
register: fail_undefined_server
failed_when: '"No setting was provided for required configuration plugin_type: galaxy_server plugin: undefined" not in fail_undefined_server.stderr'
environment:
ANSIBLE_GALAXY_SERVER_LIST: undefined
- when: not requires_auth
block:
- name: install a collection with an empty server list - {{ test_name }}
command: ansible-galaxy collection install namespace5.name -s '{{ test_server }}' {{ galaxy_verbosity }}
register: install_empty_server_list
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_SERVER_LIST: ''
- name: get result of a collection with an empty server list - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace5/name/MANIFEST.json'
register: install_empty_server_list_actual
- name: assert install a collection with an empty server list - {{ test_name }}
assert:
that:
- '"Installing ''namespace5.name:1.0.0'' to" in install_empty_server_list.stdout'
- (install_empty_server_list_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: create test requirements file with both roles and collections - {{ test_name }}
copy:
content: |
collections:
- namespace6.name
- name: namespace7.name
roles:
- skip.me
dest: '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml'
- name: install roles from requirements file with collection-only keyring option
command: ansible-galaxy role install -r {{ req_file }} -s {{ test_name }} --keyring {{ keyring }}
vars:
req_file: '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml'
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: invalid_opt
- assert:
that:
- invalid_opt is failed
- "'unrecognized arguments: --keyring' in invalid_opt.stderr"
# Need to run with -vvv to capture the message that roles in the file will be skipped
- name: install collections only with requirements-with-role.yml - {{ test_name }}
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml' -s '{{ test_name }}' -vvv
register: install_req_collection
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collections only with requirements-with-roles.yml - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_collection_actual
loop_control:
loop_var: collection
loop:
- namespace6
- namespace7
- name: assert install collections only with requirements-with-role.yml - {{ test_name }}
assert:
that:
- '"contains roles which will be ignored" in install_req_collection.stdout'
- '"Installing ''namespace6.name:1.0.0'' to" in install_req_collection.stdout'
- '"Installing ''namespace7.name:1.0.0'' to" in install_req_collection.stdout'
- (install_req_collection_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_collection_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: create test requirements file with just collections - {{ test_name }}
copy:
content: |
collections:
- namespace8.name
- name: namespace9.name
dest: '{{ galaxy_dir }}/ansible_collections/requirements.yaml'
- name: install collections with ansible-galaxy install - {{ test_name }}
command: ansible-galaxy install -r '{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}'
register: install_req
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collections with ansible-galaxy install - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
- name: assert install collections with ansible-galaxy install - {{ test_name }}
assert:
that:
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: uninstall collections for next requirements file test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: rewrite requirements file with collections and signatures
copy:
content: |
collections:
- name: namespace7.name
version: "1.0.0"
signatures:
- "{{ not_mine }}"
- "{{ also_not_mine }}"
- "file://{{ gpg_homedir }}/namespace7-name-1.0.0-MANIFEST.json.asc"
- namespace8.name
- name: namespace9.name
signatures:
- "file://{{ gpg_homedir }}/namespace9-name-1.0.0-MANIFEST.json.asc"
dest: '{{ galaxy_dir }}/ansible_collections/requirements.yaml'
vars:
not_mine: "file://{{ gpg_homedir }}/namespace1-name1-1.0.0-MANIFEST.json.asc"
also_not_mine: "file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
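# Only the last of the three signatures listed for namespace7.name actually
# signs that collection; not_mine and also_not_mine belong to
# namespace1.name1. The tasks below use this mix to exercise both the
# failure path (all signatures must be valid) and the success paths
# (default required count, or ignored status codes).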
- name: install collection with mutually exclusive options
command: ansible-galaxy collection install -r {{ req_file }} -s {{ test_name }} {{ cli_signature }}
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
# --signature is an option of 'ansible-galaxy collection install', but mutually exclusive with -r
cli_signature: "--signature file://{{ gpg_homedir }}/namespace7-name-1.0.0-MANIFEST.json.asc"
ignore_errors: yes
register: mutually_exclusive_opts
- assert:
that:
- mutually_exclusive_opts is failed
- expected_error in actual_error
vars:
expected_error: >-
The --signatures option and --requirements-file are mutually exclusive.
Use the --signatures with positional collection_name args or provide a
'signatures' key for requirements in the --requirements-file.
actual_error: "{{ mutually_exclusive_opts.stderr }}"
- name: install a collection with user-supplied signatures for verification but no keyring
command: ansible-galaxy collection install namespace1.name1:1.0.0 {{ cli_signature }}
vars:
cli_signature: "--signature file://{{ gpg_homedir }}/namespace1-name1-1.0.0-MANIFEST.json.asc"
ignore_errors: yes
register: required_together
- assert:
that:
- required_together is failed
- '"ERROR! Signatures were provided to verify namespace1.name1 but no keyring was configured." in required_together.stderr'
- name: install collections with ansible-galaxy install -r with invalid signatures - {{ test_name }}
# Note that --keyring is a valid option for 'ansible-galaxy install -r ...', not just 'ansible-galaxy collection ...'
command: ansible-galaxy install -r {{ req_file }} -s {{ test_name }} --keyring {{ keyring }} {{ galaxy_verbosity }}
register: install_req
ignore_errors: yes
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: all
- name: assert invalid signature is fatal with ansible-galaxy install - {{ test_name }}
assert:
that:
- install_req is failed
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed" in install_req.stderr'
# The other collections shouldn't be installed because they're listed
# after the failing collection and --ignore-errors was not provided
- '"Installing ''namespace8.name:1.0.0'' to" not in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" not in install_req.stdout'
# This command is hardcoded with -vvvv purposefully to evaluate extra verbosity messages
- name: install collections with ansible-galaxy install and --ignore-errors - {{ test_name }}
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} -vvvv
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }} --ignore-errors"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: all
- name: get result of install collections with ansible-galaxy install - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
- name: assert invalid signature is not fatal with ansible-galaxy install --ignore-errors - {{ test_name }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." in install_stderr'
- '"Failed to install collection namespace7.name:1.0.0 but skipping due to --ignore-errors being set." in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace(reset_color) | regex_replace(color) | regex_replace('\\n', ' ') }}"
reset_color: '\x1b\[0m'
color: '\x1b\[[0-9];[0-9]{2}m'
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
- name: install collections with only one valid signature using ansible-galaxy install - {{ test_name }}
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} {{ galaxy_verbosity }}
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }}"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collections with ansible-galaxy install - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: assert just one valid signature is not fatal with ansible-galaxy install - {{ test_name }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" not in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." not in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[2].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace(reset_color) | regex_replace(color) | regex_replace('\\n', ' ') }}"
reset_color: '\x1b\[0m'
color: '\x1b\[[0-9];[0-9]{2}m'
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: install collections with only one valid signature by ignoring the other errors
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} {{ galaxy_verbosity }} --ignore-signature-status-code FAILURE
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }}"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: all
ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES: BADSIG # cli option is appended and both status codes are ignored
- name: get result of install collections with ansible-galaxy install - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: assert invalid signature is not fatal with ansible-galaxy install - {{ test_name }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" not in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." not in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[2].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace(reset_color) | regex_replace(color) | regex_replace('\\n', ' ') }}"
reset_color: '\x1b\[0m'
color: '\x1b\[[0-9];[0-9]{2}m'
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
# Uncomment once pulp container is at pulp>=0.5.0
#- name: install cache.cache at the current latest version
# command: ansible-galaxy collection install cache.cache -s '{{ test_name }}' -vvv
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
#
#- set_fact:
# cache_version_build: '{{ (cache_version_build | int) + 1 }}'
#
#- name: publish update for cache.cache test
# setup_collections:
# server: galaxy_ng
# collections:
# - namespace: cache
# name: cache
# version: 1.0.{{ cache_version_build }}
#
#- name: make sure the cache version list is ignored on a collection version change - {{ test_name }}
# command: ansible-galaxy collection install cache.cache -s '{{ test_name }}' --force -vvv
# register: install_cached_update
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
#
#- name: get result of cache version list is ignored on a collection version change - {{ test_name }}
# slurp:
# path: '{{ galaxy_dir }}/ansible_collections/cache/cache/MANIFEST.json'
# register: install_cached_update_actual
#
#- name: assert cache version list is ignored on a collection version change - {{ test_name }}
# assert:
# that:
# - '"Installing ''cache.cache:1.0.{{ cache_version_build }}'' to" in install_cached_update.stdout'
# - (install_cached_update_actual.content | b64decode | from_json).collection_info.version == '1.0.' ~ cache_version_build
- name: install collection with symlink - {{ test_name }}
command: ansible-galaxy collection install symlink.symlink -s '{{ test_name }}' {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_symlink
- find:
paths: '{{ galaxy_dir }}/ansible_collections/symlink/symlink'
recurse: yes
file_type: any
- name: get result of install collection with symlink - {{ test_name }}
stat:
path: '{{ galaxy_dir }}/ansible_collections/symlink/symlink/{{ path }}'
register: install_symlink_actual
loop_control:
loop_var: path
loop:
- REÅDMÈ.md-link
- docs/REÅDMÈ.md
- plugins/REÅDMÈ.md
- REÅDMÈ.md-outside-link
- docs-link
- docs-link/REÅDMÈ.md
- name: assert install collection with symlink - {{ test_name }}
assert:
that:
- '"Installing ''symlink.symlink:1.0.0'' to" in install_symlink.stdout'
- install_symlink_actual.results[0].stat.islnk
- install_symlink_actual.results[0].stat.lnk_target == 'REÅDMÈ.md'
- install_symlink_actual.results[1].stat.islnk
- install_symlink_actual.results[1].stat.lnk_target == '../REÅDMÈ.md'
- install_symlink_actual.results[2].stat.islnk
- install_symlink_actual.results[2].stat.lnk_target == '../REÅDMÈ.md'
- install_symlink_actual.results[3].stat.isreg
- install_symlink_actual.results[4].stat.islnk
- install_symlink_actual.results[4].stat.lnk_target == 'docs'
- install_symlink_actual.results[5].stat.islnk
- install_symlink_actual.results[5].stat.lnk_target == '../REÅDMÈ.md'
- name: remove install directory for the next test because parent_dep.parent_collection was installed - {{ test_name }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
- name: install collection and dep compatible with multiple requirements - {{ test_name }}
command: ansible-galaxy collection install parent_dep.parent_collection parent_dep2.parent_collection
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_req
- name: assert install collections with ansible-galaxy install - {{ test_name }}
assert:
that:
- '"Installing ''parent_dep.parent_collection:1.0.0'' to" in install_req.stdout'
- '"Installing ''parent_dep2.parent_collection:1.0.0'' to" in install_req.stdout'
- '"Installing ''child_dep.child_collection:0.5.0'' to" in install_req.stdout'
- name: install a collection to a directory that contains another collection with no metadata
block:
# Collections are usable in ansible without a galaxy.yml or MANIFEST.json
- name: create a collection directory
file:
state: directory
path: '{{ galaxy_dir }}/ansible_collections/unrelated_namespace/collection_without_metadata/plugins'
- name: install a collection to the same installation directory - {{ test_name }}
command: ansible-galaxy collection install namespace1.name1
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_req
- name: assert installed collections with ansible-galaxy install - {{ test_name }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_req.stdout'
- name: remove test collection install directory - {{ test_name }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
# This command is hardcoded with -vvvv purposefully to evaluate extra verbosity messages
- name: install collection with signature with invalid keyring
command: ansible-galaxy collection install namespace1.name1 -vvvv {{ signature_option }} {{ keyring_option }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_option: "--signature file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
keyring_option: '--keyring {{ gpg_homedir }}/i_do_not_exist.kbx'
ignore_errors: yes
register: keyring_error
- assert:
that:
- keyring_error is failed
- expected_errors[0] in actual_error
- expected_errors[1] in actual_error
- expected_errors[2] in actual_error
- unexpected_warning not in actual_warning
vars:
keyring: "{{ gpg_homedir }}/i_do_not_exist.kbx"
expected_errors:
- "Signature verification failed for 'namespace1.name1' (return code 2):"
- "* The public key is not available."
- >-
* It was not possible to check the signature. This may be caused
by a missing public key or an unsupported algorithm. A RC of 4
indicates unknown algorithm, a 9 indicates a missing public key.
unexpected_warning: >-
The GnuPG keyring used for collection signature
verification was not configured but signatures were
provided by the Galaxy server to verify authenticity.
Configure a keyring for ansible-galaxy to use
or disable signature verification.
Skipping signature verification.
actual_warning: "{{ keyring_error.stderr | regex_replace(reset_color) | regex_replace(color) | regex_replace('\\n', ' ') }}"
stdout_no_color: "{{ keyring_error.stdout | regex_replace(reset_color) | regex_replace(color) }}"
# Remove formatting from the reason so it's one line
actual_error: "{{ stdout_no_color | regex_replace('\"') | regex_replace('\\n') | regex_replace(' ', ' ') }}"
reset_color: '\x1b\[0m'
color: '\x1b\[[0-9];[0-9]{2}m'
# TODO: Uncomment once signatures are provided by pulp-galaxy-ng
#- name: install collection with signature provided by Galaxy server (no keyring)
# command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }}
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
# ignore_errors: yes
# register: keyring_warning
#
#- name: assert a warning was given but signature verification did not occur without configuring the keyring
# assert:
# that:
# - keyring_warning is not failed
#      - '"Installing ''namespace1.name1:1.0.9'' to" in keyring_warning.stdout'
# # TODO: Don't just check the stdout, make sure the collection was installed.
# - expected_warning in actual_warning
# vars:
# expected_warning: >-
# The GnuPG keyring used for collection signature
# verification was not configured but signatures were
# provided by the Galaxy server to verify authenticity.
# Configure a keyring for ansible-galaxy to use
# or disable signature verification.
# Skipping signature verification.
# actual_warning: "{{ keyring_warning.stderr | regex_replace(reset_color) | regex_replace(color) | regex_replace('\\n', ' ') }}"
# reset_color: '\x1b\[0m'
# color: '\x1b\[[0-9];[0-9]{2}m'
- name: install simple collection from first accessible server with valid detached signature
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }} {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
register: from_first_good_server
- name: get installed files of install simple collection from first good server
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection from first good server
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection from first good server
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in from_first_good_server.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
# This command is hardcoded with -vvvv purposefully to evaluate extra verbosity messages
- name: install simple collection with invalid detached signature
command: ansible-galaxy collection install namespace1.name1 -vvvv {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace2-name-1.0.0-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: invalid_signature
- assert:
that:
- invalid_signature is failed
- "'Not installing namespace1.name1 because GnuPG signature verification failed.' in invalid_signature.stderr"
- expected_errors[0] in install_stdout
- expected_errors[1] in install_stdout
vars:
expected_errors:
- "* This is the counterpart to SUCCESS and used to indicate a program failure."
- "* The signature with the keyid has not been verified okay."
stdout_no_color: "{{ invalid_signature.stdout | regex_replace(reset_color) | regex_replace(color) }}"
# Remove formatting from the reason so it's one line
install_stdout: "{{ stdout_no_color | regex_replace('\"') | regex_replace('\\n') | regex_replace(' ', ' ') }}"
reset_color: '\x1b\[0m'
color: '\x1b\[[0-9];[0-9]{2}m'
- name: validate collection directory was not created
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
state: absent
register: collection_dir
check_mode: yes
failed_when: collection_dir is changed
- name: disable signature verification and install simple collection with invalid detached signature
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }} {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }} --disable-gpg-verify"
signature: "file://{{ gpg_homedir }}/namespace2-name-1.0.0-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: ignore_invalid_signature
- assert:
that:
- ignore_invalid_signature is success
- '"Installing ''namespace1.name1:1.0.9'' to" in ignore_invalid_signature.stdout'
- name: use lenient signature verification (default) without providing signatures
command: ansible-galaxy collection install namespace1.name1:1.0.0 -vvvv --keyring {{ gpg_homedir }}/pubring.kbx --force
environment:
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: "all"
register: missing_signature
- assert:
that:
- missing_signature is success
- missing_signature.rc == 0
- '"namespace1.name1:1.0.0 was installed successfully" in missing_signature.stdout'
- '"Signature verification failed for ''namespace1.name1'': no successful signatures" not in missing_signature.stdout'
- name: use strict signature verification without providing signatures
command: ansible-galaxy collection install namespace1.name1:1.0.0 -vvvv --keyring {{ gpg_homedir }}/pubring.kbx --force
environment:
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: "+1"
ignore_errors: yes
register: missing_signature
- assert:
that:
- missing_signature is failed
- missing_signature.rc == 1
- '"Signature verification failed for ''namespace1.name1'': no successful signatures" in missing_signature.stdout'
- '"Not installing namespace1.name1 because GnuPG signature verification failed" in missing_signature.stderr'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: download collections with pre-release dep - {{ test_name }}
command: ansible-galaxy collection download dep_with_beta.parent namespace1.name1:1.1.0-beta.1 -p '{{ galaxy_dir }}/scratch'
- name: install collection with concrete pre-release dep - {{ test_name }}
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/scratch/requirements.yml'
args:
chdir: '{{ galaxy_dir }}/scratch'
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_concrete_pre
- name: get result of install collections with concrete pre-release dep - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/MANIFEST.json'
register: install_concrete_pre_actual
loop_control:
loop_var: collection
loop:
- namespace1/name1
- dep_with_beta/parent
- name: assert install collections with ansible-galaxy install - {{ test_name }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_concrete_pre.stdout'
- '"Installing ''dep_with_beta.parent:1.0.0'' to" in install_concrete_pre.stdout'
- (install_concrete_pre_actual.results[0].content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- (install_concrete_pre_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: remove collection dir after round of testing - {{ test_name }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,390 |
Executing kinit for Kerberos not including PATH environment variable
|
### Summary
When running winrm with Kerberos for authentication, kinit ends up not being found because the subprocess doesn't take the executing playbook's $PATH into consideration.
https://github.com/ansible/ansible/blob/4d984613f5e16e205434cdf7290572e62b40bf62/lib/ansible/plugins/connection/winrm.py#L323-L329
https://github.com/ansible/ansible/blob/4d984613f5e16e205434cdf7290572e62b40bf62/lib/ansible/plugins/connection/winrm.py#L386-L389
Basically I run a conda environment with kinit installed there and not on the system. So the path to kinit is something like `/home/<user>/.miniconda/env/<env_name>/bin/kinit`. There's no way of knowing which user is going to be executing this playbook, so it's hard to specify an absolute path to the kinit command.
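One possible direction (a minimal sketch only, not the actual fix; the env dict and principal below are illustrative values taken from this report) would be to copy the parent process `PATH` into the environment handed to the `kinit` subprocess:
```python
import os
import subprocess
# Mirror of the env dict winrm.py builds for kinit; normally it only
# contains KRB5CCNAME (plus explicitly whitelisted kinit_env_vars).
krb5env = dict(KRB5CCNAME="FILE:/tmp/tmpeardraxc")
# Hypothetical workaround: inherit PATH so 'kinit' can be found in a
# conda env; without it the child environment has no PATH at all.
krb5env["PATH"] = os.environ.get("PATH", os.defpath)
p = subprocess.Popen(
    ["kinit", "[email protected]"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    env=krb5env,
)
stdout, stderr = p.communicate(b"secret\n")
```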
### Issue Type
Bug Report
### Component Name
winrm
### Ansible Version
```console
$ ansible --version
ansible 2.9.5
config file = /path/to/local.ansible.cfg
configured module search path = ['/home/<user>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/<user>/miniconda3/envs/ansible/bin/ansible
python version = 3.6.10 |Anaconda, Inc.| (default, Jan 7 2020, 21:14:29) [GCC 7.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_ASK_PASS(/playbook
DEFAULT_HASH_BEHAVIOUR(/playbook
DEFAULT_SCP_IF_SSH(/playbook
HOST_KEY_CHECKING(/playbook
```
### OS / Environment
Running on WSL Ubuntu 18.04
Connecting to Windows Server 2019
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: all
gather_facts: yes
```
ansible.cfg
```ini
[defaults]
host_key_checking = False
ask_pass = True
hash_behaviour = merge
[ssh_connection]
scp_if_ssh=True
```
hosts.ini
```
[all:vars]
ansible_become_method=runas
ansible_connection=winrm
ansible_winrm_transport=kerberos
ansible_winrm_server_cert_validation=ignore
ansible_winrm_kerberos_delegation=true
```
### Expected Results
Kerberos authentication to happen when executing tasks
### Actual Results
```console
$> ansible-playbook -i inventories/servers/windows.ini test.yaml --user [email protected] -vvvvv
ansible-playbook 2.9.5
config file = /path_to_playbook/local.ansible.cfg
configured module search path = ['/home/<user>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/<user>/miniconda3/envs/ansible/bin/ansible-playbook
python version = 3.6.10 |Anaconda, Inc.| (default, Jan 7 2020, 21:14:29) [GCC 7.3.0]
Using /path_to_playbook/local.ansible.cfg as config file
SSH password:
setting up inventory plugins
host_list declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
script declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
auto declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
yaml declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
Parsed /path_to_playbook/inventories/servers/windows.ini inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: test.yaml ******************************************************************************************************************
Positional arguments: test.yaml
verbosity: 5
ask_pass: True
remote_user: [email protected]
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/path_to_playbook/inventories/servers/windows.ini',)
forks: 5
1 plays in test.yaml
PLAY [all] ***************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************
task path: /path_to_playbook/test.yaml:1
Using module file /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible/modules/windows/setup.ps1
Pipelining is enabled.
<<myhost>> ESTABLISH WINRM CONNECTION FOR USER: [email protected] on PORT 5986 TO <myhost>
creating Kerberos CC at /tmp/tmpeardraxc
calling kinit with subprocess for principal [email protected]
fatal: [<myhost>]: UNREACHABLE! => {
"changed": false,
"msg": "Kerberos auth failure when calling kinit cmd 'kinit': [Errno 2] No such file or directory: 'kinit': 'kinit'",
"unreachable": true
}
PLAY RECAP ***************************************************************************************************************************
<myhost> : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77390
|
https://github.com/ansible/ansible/pull/77401
|
353511a900f6216a25a25d8a36528f636428b57b
|
60b4200bc6fee69384da990bb7884f58577fc724
| 2022-03-29T19:47:48Z |
python
| 2022-03-30T00:36:56Z |
changelogs/fragments/winrm-kinit-path.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,390 |
Executing kinit for Kerberos not including PATH environment variable
|
### Summary
When running winrm with Kerberos for authentication, kinit ends up not being found because the subprocess doesn't take the executing playbook's $PATH into consideration.
https://github.com/ansible/ansible/blob/4d984613f5e16e205434cdf7290572e62b40bf62/lib/ansible/plugins/connection/winrm.py#L323-L329
https://github.com/ansible/ansible/blob/4d984613f5e16e205434cdf7290572e62b40bf62/lib/ansible/plugins/connection/winrm.py#L386-L389
Basically I run a conda environment with kinit installed there and not on the system. So the path to kinit is something like `/home/<user>/.miniconda/env/<env_name>/bin/kinit`. There's no way of knowing which user is going to be executing this playbook, so it's hard to specify an absolute path to the kinit command.
### Issue Type
Bug Report
### Component Name
winrm
### Ansible Version
```console
$ ansible --version
ansible 2.9.5
config file = /path/to/local.ansible.cfg
configured module search path = ['/home/<user>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/<user>/miniconda3/envs/ansible/bin/ansible
python version = 3.6.10 |Anaconda, Inc.| (default, Jan 7 2020, 21:14:29) [GCC 7.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_ASK_PASS(/playbook
DEFAULT_HASH_BEHAVIOUR(/playbook
DEFAULT_SCP_IF_SSH(/playbook
HOST_KEY_CHECKING(/playbook
```
### OS / Environment
Running on WSL Ubuntu 18.04
Connecting to Windows Server 2019
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: all
gather_facts: yes
```
ansible.cfg
```ini
[defaults]
host_key_checking = False
ask_pass = True
hash_behaviour = merge
[ssh_connection]
scp_if_ssh=True
```
hosts.ini
```
[all:vars]
ansible_become_method=runas
ansible_connection=winrm
ansible_winrm_transport=kerberos
ansible_winrm_server_cert_validation=ignore
ansible_winrm_kerberos_delegation=true
```
### Expected Results
Kerberos authentication to happen when executing tasks
### Actual Results
```console
$> ansible-playbook -i inventories/servers/windows.ini test.yaml --user [email protected] -vvvvv
ansible-playbook 2.9.5
config file = /path_to_playbook/local.ansible.cfg
configured module search path = ['/home/<user>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/<user>/miniconda3/envs/ansible/bin/ansible-playbook
python version = 3.6.10 |Anaconda, Inc.| (default, Jan 7 2020, 21:14:29) [GCC 7.3.0]
Using /path_to_playbook/local.ansible.cfg as config file
SSH password:
setting up inventory plugins
host_list declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
script declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
auto declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
yaml declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
Parsed /path_to_playbook/inventories/servers/windows.ini inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: test.yaml ******************************************************************************************************************
Positional arguments: test.yaml
verbosity: 5
ask_pass: True
remote_user: [email protected]
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/path_to_playbook/inventories/servers/windows.ini',)
forks: 5
1 plays in test.yaml
PLAY [all] ***************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************
task path: /path_to_playbook/test.yaml:1
Using module file /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible/modules/windows/setup.ps1
Pipelining is enabled.
<<myhost>> ESTABLISH WINRM CONNECTION FOR USER: [email protected] on PORT 5986 TO <myhost>
creating Kerberos CC at /tmp/tmpeardraxc
calling kinit with subprocess for principal [email protected]
fatal: [<myhost>]: UNREACHABLE! => {
"changed": false,
"msg": "Kerberos auth failure when calling kinit cmd 'kinit': [Errno 2] No such file or directory: 'kinit': 'kinit'",
"unreachable": true
}
PLAY RECAP ***************************************************************************************************************************
<myhost> : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77390
|
https://github.com/ansible/ansible/pull/77401
|
353511a900f6216a25a25d8a36528f636428b57b
|
60b4200bc6fee69384da990bb7884f58577fc724
| 2022-03-29T19:47:48Z |
python
| 2022-03-30T00:36:56Z |
lib/ansible/plugins/connection/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2015 Toshio Kuratomi <[email protected]>
# (c) 2017, Peter Sprygada <[email protected]>
# (c) 2017 Ansible Project
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import fcntl
import os
import shlex
from abc import abstractmethod
from functools import wraps
from ansible import constants as C
from ansible.module_utils._text import to_bytes, to_text
from ansible.plugins import AnsiblePlugin
from ansible.utils.display import Display
from ansible.plugins.loader import connection_loader, get_shell_plugin
from ansible.utils.path import unfrackpath
display = Display()
__all__ = ['ConnectionBase', 'ensure_connect']
BUFSIZE = 65536
def ensure_connect(func):
@wraps(func)
def wrapped(self, *args, **kwargs):
if not self._connected:
self._connect()
return func(self, *args, **kwargs)
return wrapped
class ConnectionBase(AnsiblePlugin):
'''
A base class for connections to contain common code.
'''
has_pipelining = False
has_native_async = False # eg, winrm
always_pipeline_modules = False # eg, winrm
has_tty = True # for interacting with become plugins
# When running over this connection type, prefer modules written in a certain language
# as discovered by the specified file extension. An empty string as the
# language means any language.
module_implementation_preferences = ('',) # type: tuple[str, ...]
allow_executable = True
# the following control whether or not the connection supports the
# persistent connection framework or not
supports_persistence = False
force_persistence = False
default_user = None
def __init__(self, play_context, new_stdin, shell=None, *args, **kwargs):
super(ConnectionBase, self).__init__()
# All these hasattrs allow subclasses to override these parameters
if not hasattr(self, '_play_context'):
# Backwards compat: self._play_context isn't really needed, using set_options/get_option
self._play_context = play_context
if not hasattr(self, '_new_stdin'):
self._new_stdin = new_stdin
if not hasattr(self, '_display'):
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
if not hasattr(self, '_connected'):
self._connected = False
self.success_key = None
self.prompt = None
self._connected = False
self._socket_path = None
# helper plugins
self._shell = shell
        # we must always have a shell
if not self._shell:
shell_type = play_context.shell if play_context.shell else getattr(self, '_shell_type', None)
self._shell = get_shell_plugin(shell_type=shell_type, executable=self._play_context.executable)
self.become = None
def set_become_plugin(self, plugin):
self.become = plugin
@property
def connected(self):
'''Read-only property holding whether the connection to the remote host is active or closed.'''
return self._connected
@property
def socket_path(self):
'''Read-only property holding the connection socket path for this remote host'''
return self._socket_path
@staticmethod
def _split_ssh_args(argstring):
"""
Takes a string like '-o Foo=1 -o Bar="foo bar"' and returns a
list ['-o', 'Foo=1', '-o', 'Bar=foo bar'] that can be added to
the argument list. The list will not contain any empty elements.
"""
# In Python3, shlex.split doesn't work on a byte string.
return [to_text(x.strip()) for x in shlex.split(argstring) if x.strip()]
@property
@abstractmethod
def transport(self):
"""String used to identify this Connection class from other classes"""
pass
@abstractmethod
def _connect(self):
"""Connect to the host we've been initialized with"""
@ensure_connect
@abstractmethod
def exec_command(self, cmd, in_data=None, sudoable=True):
"""Run a command on the remote host.
:arg cmd: byte string containing the command
:kwarg in_data: If set, this data is passed to the command's stdin.
This is used to implement pipelining. Currently not all
connection plugins implement pipelining.
:kwarg sudoable: Tell the connection plugin if we're executing
a command via a privilege escalation mechanism. This may affect
how the connection plugin returns data. Note that not all
connections can handle privilege escalation.
:returns: a tuple of (return code, stdout, stderr) The return code is
an int while stdout and stderr are both byte strings.
When a command is executed, it goes through multiple commands to get
there. It looks approximately like this::
[LocalShell] ConnectionCommand [UsersLoginShell (*)] ANSIBLE_SHELL_EXECUTABLE [(BecomeCommand ANSIBLE_SHELL_EXECUTABLE)] Command
:LocalShell: Is optional. It is run locally to invoke the
``Connection Command``. In most instances, the
``ConnectionCommand`` can be invoked directly instead. The ssh
connection plugin which can have values that need expanding
locally specified via ssh_args is the sole known exception to
this. Shell metacharacters in the command itself should be
processed on the remote machine, not on the local machine so no
shell is needed on the local machine. (Example, ``/bin/sh``)
:ConnectionCommand: This is the command that connects us to the remote
machine to run the rest of the command. ``ansible_user``,
``ansible_ssh_host`` and so forth are fed to this piece of the
command to connect to the correct host (Examples ``ssh``,
``chroot``)
:UsersLoginShell: This shell may or may not be created depending on
the ConnectionCommand used by the connection plugin. This is the
shell that the ``ansible_user`` has configured as their login
shell. In traditional UNIX parlance, this is the last field of
            a user's ``/etc/passwd`` entry. We do not specifically try to run
the ``UsersLoginShell`` when we connect. Instead it is implicit
in the actions that the ``ConnectionCommand`` takes when it
connects to a remote machine. ``ansible_shell_type`` may be set
to inform ansible of differences in how the ``UsersLoginShell``
handles things like quoting if a shell has different semantics
than the Bourne shell.
:ANSIBLE_SHELL_EXECUTABLE: This is the shell set via the inventory var
``ansible_shell_executable`` or via
``constants.DEFAULT_EXECUTABLE`` if the inventory var is not set.
We explicitly invoke this shell so that we have predictable
quoting rules at this point. ``ANSIBLE_SHELL_EXECUTABLE`` is only
settable by the user because some sudo setups may only allow
invoking a specific shell. (For instance, ``/bin/bash`` may be
allowed but ``/bin/sh``, our default, may not). We invoke this
twice, once after the ``ConnectionCommand`` and once after the
``BecomeCommand``. After the ConnectionCommand, this is run by
the ``UsersLoginShell``. After the ``BecomeCommand`` we specify
that the ``ANSIBLE_SHELL_EXECUTABLE`` is being invoked directly.
        :BecomeCommand ANSIBLE_SHELL_EXECUTABLE: Is the command that performs
privilege escalation. Setting this up is performed by the action
plugin prior to running ``exec_command``. So we just get passed
:param:`cmd` which has the BecomeCommand already added.
(Examples: sudo, su) If we have a BecomeCommand then we will
invoke a ANSIBLE_SHELL_EXECUTABLE shell inside of it so that we
have a consistent view of quoting.
:Command: Is the command we're actually trying to run remotely.
(Examples: mkdir -p $HOME/.ansible, python $HOME/.ansible/tmp-script-file)
"""
pass
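    # Illustrative only (not part of the original file): with assumed values
    # user=deploy, host=web1 and sudo as the become method, the layering
    # documented above corresponds roughly to
    #
    #   ssh deploy@web1 "/bin/sh -c 'sudo -S /bin/sh -c \"echo BECOME-SUCCESS; <Command>\"'"
    #
    # where the outer /bin/sh is ANSIBLE_SHELL_EXECUTABLE run via the user's
    # login shell and the inner one is the shell invoked after the BecomeCommand.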
@ensure_connect
@abstractmethod
def put_file(self, in_path, out_path):
"""Transfer a file from local to remote"""
pass
@ensure_connect
@abstractmethod
def fetch_file(self, in_path, out_path):
"""Fetch a file from remote to local; callers are expected to have pre-created the directory chain for out_path"""
pass
@abstractmethod
def close(self):
"""Terminate the connection"""
pass
def connection_lock(self):
f = self._play_context.connection_lockfd
display.vvvv('CONNECTION: pid %d waiting for lock on %d' % (os.getpid(), f), host=self._play_context.remote_addr)
fcntl.lockf(f, fcntl.LOCK_EX)
display.vvvv('CONNECTION: pid %d acquired lock on %d' % (os.getpid(), f), host=self._play_context.remote_addr)
def connection_unlock(self):
f = self._play_context.connection_lockfd
fcntl.lockf(f, fcntl.LOCK_UN)
display.vvvv('CONNECTION: pid %d released lock on %d' % (os.getpid(), f), host=self._play_context.remote_addr)
def reset(self):
display.warning("Reset is not implemented for this connection")
def update_vars(self, variables):
'''
Adds 'magic' variables relating to connections to the variable dictionary provided.
        In case users need to access them from the play; this is a legacy carried over from runner.
'''
for varname in C.COMMON_CONNECTION_VARS:
value = None
if varname in variables:
# dont update existing
continue
elif 'password' in varname or 'passwd' in varname:
# no secrets!
continue
elif varname == 'ansible_connection':
# its me mom!
value = self._load_name
elif varname == 'ansible_shell_type':
# its my cousin ...
value = self._shell._load_name
else:
                # deal with generic options if the plugin supports them (for example, not all connections have a remote user)
options = C.config.get_plugin_options_from_var('connection', self._load_name, varname)
if options:
value = self.get_option(options[0]) # for these variables there should be only one option
elif 'become' not in varname:
                    # fall back to play_context, unless become related. TODO: in the end this should come from task/play and not pc
for prop, var_list in C.MAGIC_VARIABLE_MAPPING.items():
if varname in var_list:
try:
value = getattr(self._play_context, prop)
break
except AttributeError:
# it was not defined, fine to ignore
continue
if value is not None:
display.debug('Set connection var {0} to {1}'.format(varname, value))
variables[varname] = value
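    # Illustration (not from the original file): for a plugin loaded as 'ssh'
    # where the play defines no connection vars, the loop above would
    # typically inject ansible_connection='ssh' and, when the plugin exposes
    # a matching option, values such as ansible_user, while always skipping
    # password-like variables and anything the user already set.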
class NetworkConnectionBase(ConnectionBase):
"""
A base class for network-style connections.
"""
force_persistence = True
# Do not use _remote_is_local in other connections
_remote_is_local = True
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(NetworkConnectionBase, self).__init__(play_context, new_stdin, *args, **kwargs)
self._messages = []
self._conn_closed = False
self._network_os = self._play_context.network_os
self._local = connection_loader.get('local', play_context, '/dev/null')
self._local.set_options()
self._sub_plugin = {}
self._cached_variables = (None, None, None)
# reconstruct the socket_path and set instance values accordingly
self._ansible_playbook_pid = kwargs.get('ansible_playbook_pid')
self._update_connection_state()
def __getattr__(self, name):
try:
return self.__dict__[name]
except KeyError:
if not name.startswith('_'):
plugin = self._sub_plugin.get('obj')
if plugin:
method = getattr(plugin, name, None)
if method is not None:
return method
raise AttributeError("'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
def exec_command(self, cmd, in_data=None, sudoable=True):
return self._local.exec_command(cmd, in_data, sudoable)
def queue_message(self, level, message):
"""
Adds a message to the queue of messages waiting to be pushed back to the controller process.
:arg level: A string which can either be the name of a method in display, or 'log'. When
the messages are returned to task_executor, a value of log will correspond to
``display.display(message, log_only=True)``, while another value will call ``display.[level](message)``
"""
self._messages.append((level, message))
def pop_messages(self):
messages, self._messages = self._messages, []
return messages
def put_file(self, in_path, out_path):
"""Transfer a file from local to remote"""
return self._local.put_file(in_path, out_path)
def fetch_file(self, in_path, out_path):
"""Fetch a file from remote to local"""
return self._local.fetch_file(in_path, out_path)
def reset(self):
'''
Reset the connection
'''
if self._socket_path:
self.queue_message('vvvv', 'resetting persistent connection for socket_path %s' % self._socket_path)
self.close()
self.queue_message('vvvv', 'reset call on connection instance')
def close(self):
self._conn_closed = True
if self._connected:
self._connected = False
def set_options(self, task_keys=None, var_options=None, direct=None):
super(NetworkConnectionBase, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
if self.get_option('persistent_log_messages'):
warning = "Persistent connection logging is enabled for %s. This will log ALL interactions" % self._play_context.remote_addr
logpath = getattr(C, 'DEFAULT_LOG_PATH')
if logpath is not None:
warning += " to %s" % logpath
self.queue_message('warning', "%s and WILL NOT redact sensitive configuration like passwords. USE WITH CAUTION!" % warning)
if self._sub_plugin.get('obj') and self._sub_plugin.get('type') != 'external':
try:
self._sub_plugin['obj'].set_options(task_keys=task_keys, var_options=var_options, direct=direct)
except AttributeError:
pass
def _update_connection_state(self):
'''
Reconstruct the connection socket_path and check if it exists
If the socket path exists then the connection is active and set
both the _socket_path value to the path and the _connected value
to True. If the socket path doesn't exist, leave the socket path
value to None and the _connected value to False
'''
ssh = connection_loader.get('ssh', class_only=True)
control_path = ssh._create_control_path(
self._play_context.remote_addr, self._play_context.port,
self._play_context.remote_user, self._play_context.connection,
self._ansible_playbook_pid
)
tmp_path = unfrackpath(C.PERSISTENT_CONTROL_PATH_DIR)
socket_path = unfrackpath(control_path % dict(directory=tmp_path))
if os.path.exists(socket_path):
self._connected = True
self._socket_path = socket_path
def _log_messages(self, message):
if self.get_option('persistent_log_messages'):
self.queue_message('log', message)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,390 |
Executing kinit for Kerberos not including PATH environment variable
|
### Summary
When running winrm with Kerberos for authentication, kinit ends up not being found because the subprocess doesn't take the executing playbook's $PATH into consideration.
https://github.com/ansible/ansible/blob/4d984613f5e16e205434cdf7290572e62b40bf62/lib/ansible/plugins/connection/winrm.py#L323-L329
https://github.com/ansible/ansible/blob/4d984613f5e16e205434cdf7290572e62b40bf62/lib/ansible/plugins/connection/winrm.py#L386-L389
Basically I run a conda environment with kinit installed there and not on the system. So the path to kinit is something like `/home/<user>/.miniconda/env/<env_name>/bin/kinit`. There's no way of knowing which user is going to be executing this playbook, so it's hard to specify an absolute path to the kinit command.
### Issue Type
Bug Report
### Component Name
winrm
### Ansible Version
```console
$ ansible --version
ansible 2.9.5
config file = /path/to/local.ansible.cfg
configured module search path = ['/home/<user>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/<user>/miniconda3/envs/ansible/bin/ansible
python version = 3.6.10 |Anaconda, Inc.| (default, Jan 7 2020, 21:14:29) [GCC 7.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_ASK_PASS(/playbook
DEFAULT_HASH_BEHAVIOUR(/playbook
DEFAULT_SCP_IF_SSH(/playbook
HOST_KEY_CHECKING(/playbook
```
### OS / Environment
Running on WSL Ubuntu 18.04
Connecting to Windows Server 2019
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: all
gather_facts: yes
```
ansible.cfg
```ini
[defaults]
host_key_checking = False
ask_pass = True
hash_behaviour = merge
[ssh_connection]
scp_if_ssh=True
```
hosts.ini
```
[all:vars]
ansible_become_method=runas
ansible_connection=winrm
ansible_winrm_transport=kerberos
ansible_winrm_server_cert_validation=ignore
ansible_winrm_kerberos_delegation=true
```
### Expected Results
Kerberos authentication to happen when executing tasks
### Actual Results
```console
$> ansible-playbook -i inventories/servers/windows.ini test.yaml --user [email protected] -vvvvv
ansible-playbook 2.9.5
config file = /path_to_playbook/local.ansible.cfg
configured module search path = ['/home/<user>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/<user>/miniconda3/envs/ansible/bin/ansible-playbook
python version = 3.6.10 |Anaconda, Inc.| (default, Jan 7 2020, 21:14:29) [GCC 7.3.0]
Using /path_to_playbook/local.ansible.cfg as config file
SSH password:
setting up inventory plugins
host_list declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
script declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
auto declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
yaml declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
Parsed /path_to_playbook/inventories/servers/windows.ini inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: test.yaml ******************************************************************************************************************
Positional arguments: test.yaml
verbosity: 5
ask_pass: True
remote_user: [email protected]
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/path_to_playbook/inventories/servers/windows.ini',)
forks: 5
1 plays in test.yaml
PLAY [all] ***************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************
task path: /path_to_playbook/test.yaml:1
Using module file /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible/modules/windows/setup.ps1
Pipelining is enabled.
<<myhost>> ESTABLISH WINRM CONNECTION FOR USER: [email protected] on PORT 5986 TO <myhost>
creating Kerberos CC at /tmp/tmpeardraxc
calling kinit with subprocess for principal [email protected]
fatal: [<myhost>]: UNREACHABLE! => {
"changed": false,
"msg": "Kerberos auth failure when calling kinit cmd 'kinit': [Errno 2] No such file or directory: 'kinit': 'kinit'",
"unreachable": true
}
PLAY RECAP ***************************************************************************************************************************
<myhost> : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77390
|
https://github.com/ansible/ansible/pull/77401
|
353511a900f6216a25a25d8a36528f636428b57b
|
60b4200bc6fee69384da990bb7884f58577fc724
| 2022-03-29T19:47:48Z |
python
| 2022-03-30T00:36:56Z |
lib/ansible/plugins/connection/winrm.py
|
# (c) 2014, Chris Church <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
author: Ansible Core Team
name: winrm
short_description: Run tasks over Microsoft's WinRM
description:
- Run commands or put/fetch on a target via WinRM
- This plugin allows extra arguments to be passed that are supported by the protocol but not explicitly defined here.
They should take the form of variables declared with the following pattern C(ansible_winrm_<option>).
version_added: "2.0"
extends_documentation_fragment:
- connection_pipelining
requirements:
- pywinrm (python library)
options:
# figure out more elegant 'delegation'
remote_addr:
description:
- Address of the windows machine
default: inventory_hostname
vars:
- name: ansible_host
- name: ansible_winrm_host
type: str
remote_user:
description:
- The user to log in as to the Windows machine
vars:
- name: ansible_user
- name: ansible_winrm_user
keyword:
- name: remote_user
type: str
remote_password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
vars:
- name: ansible_password
- name: ansible_winrm_pass
- name: ansible_winrm_password
type: str
aliases:
- password # Needed for --ask-pass to come through on delegation
port:
description:
- port for winrm to connect on remote target
- The default is the https (5986) port, if using http it should be 5985
vars:
- name: ansible_port
- name: ansible_winrm_port
default: 5986
keyword:
- name: port
type: integer
scheme:
description:
- URI scheme to use
- If not set, then will default to C(https) or C(http) if I(port) is
C(5985).
choices: [http, https]
vars:
- name: ansible_winrm_scheme
type: str
path:
description: URI path to connect to
default: '/wsman'
vars:
- name: ansible_winrm_path
type: str
transport:
description:
- List of winrm transports to attempt to use (ssl, plaintext, kerberos, etc)
- If None (the default) the plugin will try to automatically guess the correct list
- The choices available depend on your version of pywinrm
type: list
elements: string
vars:
- name: ansible_winrm_transport
kerberos_command:
        description: kerberos command to use to request an authentication ticket
default: kinit
vars:
- name: ansible_winrm_kinit_cmd
type: str
kinit_args:
description:
- Extra arguments to pass to C(kinit) when getting the Kerberos authentication ticket.
- By default no extra arguments are passed into C(kinit) unless I(ansible_winrm_kerberos_delegation) is also
set. In that case C(-f) is added to the C(kinit) args so a forwardable ticket is retrieved.
- If set, the args will overwrite any existing defaults for C(kinit), including C(-f) for a delegated ticket.
type: str
vars:
- name: ansible_winrm_kinit_args
version_added: '2.11'
kinit_env_vars:
description:
- A list of environment variables to pass through to C(kinit) when getting the Kerberos authentication ticket.
- By default no environment variables are passed through and C(kinit) is run with a blank slate.
- The environment variable C(KRB5CCNAME) cannot be specified here as it's used to store the temp Kerberos
ticket used by WinRM.
type: list
elements: str
default: []
ini:
- section: winrm
key: kinit_env_vars
vars:
- name: ansible_winrm_kinit_env_vars
version_added: '2.12'
kerberos_mode:
description:
- kerberos usage mode.
            - The managed option means Ansible will obtain the Kerberos ticket.
- While the manual one means a ticket must already have been obtained by the user.
            - If you are having issues with Ansible freezing when trying to obtain the
Kerberos ticket, you can either set this to C(manual) and obtain
it outside Ansible or install C(pexpect) through pip and try
again.
choices: [managed, manual]
vars:
- name: ansible_winrm_kinit_mode
type: str
connection_timeout:
description:
- Sets the operation and read timeout settings for the WinRM
connection.
- Corresponds to the C(operation_timeout_sec) and
C(read_timeout_sec) args in pywinrm so avoid setting these vars
with this one.
- The default value is whatever is set in the installed version of
pywinrm.
vars:
- name: ansible_winrm_connection_timeout
type: int
"""
import base64
import logging
import os
import re
import traceback
import json
import tempfile
import shlex
import subprocess
from inspect import getfullargspec
from urllib.parse import urlunsplit
HAVE_KERBEROS = False
try:
import kerberos
HAVE_KERBEROS = True
except ImportError:
pass
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleConnectionFailure
from ansible.errors import AnsibleFileNotFound
from ansible.module_utils.json_utils import _filter_non_json_lines
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.six import binary_type
from ansible.plugins.connection import ConnectionBase
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.hashing import secure_hash
from ansible.utils.display import Display
try:
import winrm
from winrm import Response
from winrm.protocol import Protocol
import requests.exceptions
HAS_WINRM = True
except ImportError as e:
HAS_WINRM = False
WINRM_IMPORT_ERR = e
try:
import xmltodict
HAS_XMLTODICT = True
except ImportError as e:
HAS_XMLTODICT = False
XMLTODICT_IMPORT_ERR = e
HAS_PEXPECT = False
try:
import pexpect
# echo was added in pexpect 3.3+ which is newer than the RHEL package
# we can only use pexpect for kerb auth if echo is a valid kwarg
# https://github.com/ansible/ansible/issues/43462
if hasattr(pexpect, 'spawn'):
argspec = getfullargspec(pexpect.spawn.__init__)
if 'echo' in argspec.args:
HAS_PEXPECT = True
except ImportError as e:
pass
# used to try and parse the hostname and detect if IPv6 is being used
try:
import ipaddress
HAS_IPADDRESS = True
except ImportError:
HAS_IPADDRESS = False
display = Display()
class Connection(ConnectionBase):
'''WinRM connections over HTTP/HTTPS.'''
transport = 'winrm'
module_implementation_preferences = ('.ps1', '.exe', '')
allow_executable = False
has_pipelining = True
allow_extras = True
def __init__(self, *args, **kwargs):
self.always_pipeline_modules = True
self.has_native_async = True
self.protocol = None
self.shell_id = None
self.delegate = None
self._shell_type = 'powershell'
super(Connection, self).__init__(*args, **kwargs)
if not C.DEFAULT_DEBUG:
logging.getLogger('requests_credssp').setLevel(logging.INFO)
logging.getLogger('requests_kerberos').setLevel(logging.INFO)
logging.getLogger('urllib3').setLevel(logging.INFO)
def _build_winrm_kwargs(self):
# this used to be in set_options, as win_reboot needs to be able to
# override the conn timeout, we need to be able to build the args
# after setting individual options. This is called by _connect before
# starting the WinRM connection
self._winrm_host = self.get_option('remote_addr')
self._winrm_user = self.get_option('remote_user')
self._winrm_pass = self.get_option('remote_password')
self._winrm_port = self.get_option('port')
self._winrm_scheme = self.get_option('scheme')
# old behaviour, scheme should default to http if not set and the port
# is 5985 otherwise https
if self._winrm_scheme is None:
self._winrm_scheme = 'http' if self._winrm_port == 5985 else 'https'
self._winrm_path = self.get_option('path')
self._kinit_cmd = self.get_option('kerberos_command')
self._winrm_transport = self.get_option('transport')
self._winrm_connection_timeout = self.get_option('connection_timeout')
if hasattr(winrm, 'FEATURE_SUPPORTED_AUTHTYPES'):
self._winrm_supported_authtypes = set(winrm.FEATURE_SUPPORTED_AUTHTYPES)
else:
# for legacy versions of pywinrm, use the values we know are supported
self._winrm_supported_authtypes = set(['plaintext', 'ssl', 'kerberos'])
# calculate transport if needed
if self._winrm_transport is None or self._winrm_transport[0] is None:
# TODO: figure out what we want to do with auto-transport selection in the face of NTLM/Kerb/CredSSP/Cert/Basic
transport_selector = ['ssl'] if self._winrm_scheme == 'https' else ['plaintext']
if HAVE_KERBEROS and ((self._winrm_user and '@' in self._winrm_user)):
self._winrm_transport = ['kerberos'] + transport_selector
else:
self._winrm_transport = transport_selector
unsupported_transports = set(self._winrm_transport).difference(self._winrm_supported_authtypes)
if unsupported_transports:
raise AnsibleError('The installed version of WinRM does not support transport(s) %s' %
to_native(list(unsupported_transports), nonstring='simplerepr'))
# if kerberos is among our transports and there's a password specified, we're managing the tickets
kinit_mode = self.get_option('kerberos_mode')
if kinit_mode is None:
# HACK: ideally, remove multi-transport stuff
self._kerb_managed = "kerberos" in self._winrm_transport and (self._winrm_pass is not None and self._winrm_pass != "")
elif kinit_mode == "managed":
self._kerb_managed = True
elif kinit_mode == "manual":
self._kerb_managed = False
# arg names we're going passing directly
internal_kwarg_mask = {'self', 'endpoint', 'transport', 'username', 'password', 'scheme', 'path', 'kinit_mode', 'kinit_cmd'}
self._winrm_kwargs = dict(username=self._winrm_user, password=self._winrm_pass)
argspec = getfullargspec(Protocol.__init__)
supported_winrm_args = set(argspec.args)
supported_winrm_args.update(internal_kwarg_mask)
passed_winrm_args = {v.replace('ansible_winrm_', '') for v in self.get_option('_extras')}
unsupported_args = passed_winrm_args.difference(supported_winrm_args)
# warn for kwargs unsupported by the installed version of pywinrm
for arg in unsupported_args:
display.warning("ansible_winrm_{0} unsupported by pywinrm (is an up-to-date version of pywinrm installed?)".format(arg))
# pass through matching extras, excluding the list we want to treat specially
for arg in passed_winrm_args.difference(internal_kwarg_mask).intersection(supported_winrm_args):
self._winrm_kwargs[arg] = self.get_option('_extras')['ansible_winrm_%s' % arg]
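        # Hypothetical example: an inventory var ansible_winrm_read_timeout_sec=70
        # arrives via '_extras' and, because read_timeout_sec is a supported
        # Protocol argument, is copied into self._winrm_kwargs by the loop above.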
# Until pykerberos has enough goodies to implement a rudimentary kinit/klist, simplest way is to let each connection
# auth itself with a private CCACHE.
def _kerb_auth(self, principal, password):
if password is None:
password = ""
self._kerb_ccache = tempfile.NamedTemporaryFile()
display.vvvvv("creating Kerberos CC at %s" % self._kerb_ccache.name)
krb5ccname = "FILE:%s" % self._kerb_ccache.name
os.environ["KRB5CCNAME"] = krb5ccname
krb5env = dict(KRB5CCNAME=krb5ccname)
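        # e.g. KRB5CCNAME=FILE:/tmp/tmpXXXXXX gives this connection a private ticket cache,
        # isolated from the invoking user's default credential cache (note).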
# Add any explicit environment vars into the krb5env block
kinit_env_vars = self.get_option('kinit_env_vars')
for var in kinit_env_vars:
if var not in krb5env and var in os.environ:
krb5env[var] = os.environ[var]
# Stores various flags to call with kinit, these could be explicit args set by 'ansible_winrm_kinit_args' OR
# '-f' if kerberos delegation is requested (ansible_winrm_kerberos_delegation).
kinit_cmdline = [self._kinit_cmd]
kinit_args = self.get_option('kinit_args')
if kinit_args:
kinit_args = [to_text(a) for a in shlex.split(kinit_args) if a.strip()]
kinit_cmdline.extend(kinit_args)
elif boolean(self.get_option('_extras').get('ansible_winrm_kerberos_delegation', False)):
kinit_cmdline.append('-f')
kinit_cmdline.append(principal)
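        # e.g. ansible_winrm_kinit_args='-f -p' for principal '[email protected]' builds
        # the argv ['kinit', '-f', '-p', '[email protected]'] (illustrative note).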
# pexpect runs the process in its own pty so it can correctly send
# the password as input even on MacOS which blocks subprocess from
# doing so. Unfortunately it is not available on the built in Python
# so we can only use it if someone has installed it
if HAS_PEXPECT:
proc_mechanism = "pexpect"
command = kinit_cmdline.pop(0)
password = to_text(password, encoding='utf-8',
errors='surrogate_or_strict')
display.vvvv("calling kinit with pexpect for principal %s"
% principal)
try:
child = pexpect.spawn(command, kinit_cmdline, timeout=60,
env=krb5env, echo=False)
except pexpect.ExceptionPexpect as err:
err_msg = "Kerberos auth failure when calling kinit cmd " \
"'%s': %s" % (command, to_native(err))
raise AnsibleConnectionFailure(err_msg)
try:
child.expect(".*:")
child.sendline(password)
except OSError as err:
# child exited before the pass was sent, Ansible will raise
# error based on the rc below, just display the error here
display.vvvv("kinit with pexpect raised OSError: %s"
% to_native(err))
# technically this is the stdout + stderr but to match the
# subprocess error checking behaviour, we will call it stderr
stderr = child.read()
child.wait()
rc = child.exitstatus
else:
proc_mechanism = "subprocess"
password = to_bytes(password, encoding='utf-8',
errors='surrogate_or_strict')
display.vvvv("calling kinit with subprocess for principal %s"
% principal)
try:
p = subprocess.Popen(kinit_cmdline, stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=krb5env)
except OSError as err:
err_msg = "Kerberos auth failure when calling kinit cmd " \
"'%s': %s" % (self._kinit_cmd, to_native(err))
raise AnsibleConnectionFailure(err_msg)
stdout, stderr = p.communicate(password + b'\n')
            rc = p.returncode
if rc != 0:
# one last attempt at making sure the password does not exist
# in the output
exp_msg = to_native(stderr.strip())
exp_msg = exp_msg.replace(to_native(password), "<redacted>")
err_msg = "Kerberos auth failure for principal %s with %s: %s" \
% (principal, proc_mechanism, exp_msg)
raise AnsibleConnectionFailure(err_msg)
display.vvvvv("kinit succeeded for principal %s" % principal)
def _winrm_connect(self):
'''
Establish a WinRM connection over HTTP/HTTPS.
'''
display.vvv("ESTABLISH WINRM CONNECTION FOR USER: %s on PORT %s TO %s" %
(self._winrm_user, self._winrm_port, self._winrm_host), host=self._winrm_host)
winrm_host = self._winrm_host
if HAS_IPADDRESS:
display.debug("checking if winrm_host %s is an IPv6 address" % winrm_host)
try:
ipaddress.IPv6Address(winrm_host)
except ipaddress.AddressValueError:
pass
else:
winrm_host = "[%s]" % winrm_host
netloc = '%s:%d' % (winrm_host, self._winrm_port)
endpoint = urlunsplit((self._winrm_scheme, netloc, self._winrm_path, '', ''))
errors = []
for transport in self._winrm_transport:
if transport == 'kerberos':
if not HAVE_KERBEROS:
errors.append('kerberos: the python kerberos library is not installed')
continue
if self._kerb_managed:
self._kerb_auth(self._winrm_user, self._winrm_pass)
display.vvvvv('WINRM CONNECT: transport=%s endpoint=%s' % (transport, endpoint), host=self._winrm_host)
try:
winrm_kwargs = self._winrm_kwargs.copy()
if self._winrm_connection_timeout:
winrm_kwargs['operation_timeout_sec'] = self._winrm_connection_timeout
winrm_kwargs['read_timeout_sec'] = self._winrm_connection_timeout + 1
protocol = Protocol(endpoint, transport=transport, **winrm_kwargs)
# open the shell from connect so we know we're able to talk to the server
if not self.shell_id:
self.shell_id = protocol.open_shell(codepage=65001) # UTF-8
display.vvvvv('WINRM OPEN SHELL: %s' % self.shell_id, host=self._winrm_host)
return protocol
except Exception as e:
err_msg = to_text(e).strip()
if re.search(to_text(r'Operation\s+?timed\s+?out'), err_msg, re.I):
raise AnsibleError('the connection attempt timed out')
m = re.search(to_text(r'Code\s+?(\d{3})'), err_msg)
if m:
code = int(m.groups()[0])
if code == 401:
err_msg = 'the specified credentials were rejected by the server'
elif code == 411:
return protocol
errors.append(u'%s: %s' % (transport, err_msg))
display.vvvvv(u'WINRM CONNECTION ERROR: %s\n%s' % (err_msg, to_text(traceback.format_exc())), host=self._winrm_host)
if errors:
raise AnsibleConnectionFailure(', '.join(map(to_native, errors)))
else:
raise AnsibleError('No transport found for WinRM connection')
def _winrm_send_input(self, protocol, shell_id, command_id, stdin, eof=False):
rq = {'env:Envelope': protocol._get_soap_header(
resource_uri='http://schemas.microsoft.com/wbem/wsman/1/windows/shell/cmd',
action='http://schemas.microsoft.com/wbem/wsman/1/windows/shell/Send',
shell_id=shell_id)}
stream = rq['env:Envelope'].setdefault('env:Body', {}).setdefault('rsp:Send', {})\
.setdefault('rsp:Stream', {})
stream['@Name'] = 'stdin'
stream['@CommandId'] = command_id
stream['#text'] = base64.b64encode(to_bytes(stdin))
if eof:
stream['@End'] = 'true'
protocol.send_message(xmltodict.unparse(rq))
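        # e.g. each stdin chunk arrives base64-encoded in rsp:Stream/#text; eof=True adds
        # @End='true' so the remote command sees end-of-input (note).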
def _winrm_exec(self, command, args=(), from_exec=False, stdin_iterator=None):
if not self.protocol:
self.protocol = self._winrm_connect()
self._connected = True
if from_exec:
display.vvvvv("WINRM EXEC %r %r" % (command, args), host=self._winrm_host)
else:
display.vvvvvv("WINRM EXEC %r %r" % (command, args), host=self._winrm_host)
command_id = None
try:
stdin_push_failed = False
command_id = self.protocol.run_command(self.shell_id, to_bytes(command), map(to_bytes, args), console_mode_stdin=(stdin_iterator is None))
try:
if stdin_iterator:
for (data, is_last) in stdin_iterator:
self._winrm_send_input(self.protocol, self.shell_id, command_id, data, eof=is_last)
except Exception as ex:
display.warning("ERROR DURING WINRM SEND INPUT - attempting to recover: %s %s"
% (type(ex).__name__, to_text(ex)))
display.debug(traceback.format_exc())
stdin_push_failed = True
# NB: this can hang if the receiver is still running (eg, network failed a Send request but the server's still happy).
# FUTURE: Consider adding pywinrm status check/abort operations to see if the target is still running after a failure.
resptuple = self.protocol.get_command_output(self.shell_id, command_id)
# ensure stdout/stderr are text for py3
# FUTURE: this should probably be done internally by pywinrm
response = Response(tuple(to_text(v) if isinstance(v, binary_type) else v for v in resptuple))
# TODO: check result from response and set stdin_push_failed if we have nonzero
if from_exec:
display.vvvvv('WINRM RESULT %r' % to_text(response), host=self._winrm_host)
else:
display.vvvvvv('WINRM RESULT %r' % to_text(response), host=self._winrm_host)
display.vvvvvv('WINRM STDOUT %s' % to_text(response.std_out), host=self._winrm_host)
display.vvvvvv('WINRM STDERR %s' % to_text(response.std_err), host=self._winrm_host)
if stdin_push_failed:
# There are cases where the stdin input failed but the WinRM service still processed it. We attempt to
# see if stdout contains a valid json return value so we can ignore this error
try:
filtered_output, dummy = _filter_non_json_lines(response.std_out)
json.loads(filtered_output)
except ValueError:
# stdout does not contain a return response, stdin input was a fatal error
stderr = to_bytes(response.std_err, encoding='utf-8')
if stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
raise AnsibleError('winrm send_input failed; \nstdout: %s\nstderr %s'
% (to_native(response.std_out), to_native(stderr)))
return response
except requests.exceptions.Timeout as exc:
raise AnsibleConnectionFailure('winrm connection error: %s' % to_native(exc))
finally:
if command_id:
self.protocol.cleanup_command(self.shell_id, command_id)
def _connect(self):
if not HAS_WINRM:
raise AnsibleError("winrm or requests is not installed: %s" % to_native(WINRM_IMPORT_ERR))
elif not HAS_XMLTODICT:
raise AnsibleError("xmltodict is not installed: %s" % to_native(XMLTODICT_IMPORT_ERR))
super(Connection, self)._connect()
if not self.protocol:
self._build_winrm_kwargs() # build the kwargs from the options set
self.protocol = self._winrm_connect()
self._connected = True
return self
def reset(self):
if not self._connected:
return
self.protocol = None
self.shell_id = None
self._connect()
def _wrapper_payload_stream(self, payload, buffer_size=200000):
payload_bytes = to_bytes(payload)
byte_count = len(payload_bytes)
for i in range(0, byte_count, buffer_size):
yield payload_bytes[i:i + buffer_size], i + buffer_size >= byte_count
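        # e.g. a 450000-byte payload yields (b[:200000], False), (b[200000:400000], False)
        # and (b[400000:], True), the final True marking EOF for _winrm_send_input (note).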
def exec_command(self, cmd, in_data=None, sudoable=True):
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
cmd_parts = self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False)
# TODO: display something meaningful here
display.vvv("EXEC (via pipeline wrapper)")
stdin_iterator = None
if in_data:
stdin_iterator = self._wrapper_payload_stream(in_data)
result = self._winrm_exec(cmd_parts[0], cmd_parts[1:], from_exec=True, stdin_iterator=stdin_iterator)
result.std_out = to_bytes(result.std_out)
result.std_err = to_bytes(result.std_err)
# parse just stderr from CLIXML output
if result.std_err.startswith(b"#< CLIXML"):
try:
result.std_err = _parse_clixml(result.std_err)
except Exception:
# unsure if we're guaranteed a valid xml doc- use raw output in case of error
pass
return (result.status_code, result.std_out, result.std_err)
# FUTURE: determine buffer size at runtime via remote winrm config?
def _put_file_stdin_iterator(self, in_path, out_path, buffer_size=250000):
in_size = os.path.getsize(to_bytes(in_path, errors='surrogate_or_strict'))
offset = 0
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as in_file:
for out_data in iter((lambda: in_file.read(buffer_size)), b''):
offset += len(out_data)
self._display.vvvvv('WINRM PUT "%s" to "%s" (offset=%d size=%d)' % (in_path, out_path, offset, len(out_data)), host=self._winrm_host)
# yes, we're double-encoding over the wire in this case- we want to ensure that the data shipped to the end PS pipeline is still b64-encoded
b64_data = base64.b64encode(out_data) + b'\r\n'
# cough up the data, as well as an indicator if this is the last chunk so winrm_send knows to set the End signal
yield b64_data, (in_file.tell() == in_size)
if offset == 0: # empty file, return an empty buffer + eof to close it
yield "", True
def put_file(self, in_path, out_path):
super(Connection, self).put_file(in_path, out_path)
out_path = self._shell._unquote(out_path)
display.vvv('PUT "%s" TO "%s"' % (in_path, out_path), host=self._winrm_host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound('file or module does not exist: "%s"' % to_native(in_path))
script_template = u'''
begin {{
$path = '{0}'
$DebugPreference = "Continue"
$ErrorActionPreference = "Stop"
Set-StrictMode -Version 2
$fd = [System.IO.File]::Create($path)
$sha1 = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Create()
$bytes = @() #initialize for empty file case
}}
process {{
$bytes = [System.Convert]::FromBase64String($input)
$sha1.TransformBlock($bytes, 0, $bytes.Length, $bytes, 0) | Out-Null
$fd.Write($bytes, 0, $bytes.Length)
}}
end {{
$sha1.TransformFinalBlock($bytes, 0, 0) | Out-Null
$hash = [System.BitConverter]::ToString($sha1.Hash).Replace("-", "").ToLowerInvariant()
$fd.Close()
Write-Output "{{""sha1"":""$hash""}}"
}}
'''
script = script_template.format(self._shell._escape(out_path))
cmd_parts = self._shell._encode_script(script, as_list=True, strict_mode=False, preserve_rc=False)
result = self._winrm_exec(cmd_parts[0], cmd_parts[1:], stdin_iterator=self._put_file_stdin_iterator(in_path, out_path))
# TODO: improve error handling
if result.status_code != 0:
raise AnsibleError(to_native(result.std_err))
try:
put_output = json.loads(result.std_out)
except ValueError:
# stdout does not contain a valid response
stderr = to_bytes(result.std_err, encoding='utf-8')
if stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
raise AnsibleError('winrm put_file failed; \nstdout: %s\nstderr %s' % (to_native(result.std_out), to_native(stderr)))
remote_sha1 = put_output.get("sha1")
if not remote_sha1:
raise AnsibleError("Remote sha1 was not returned")
local_sha1 = secure_hash(in_path)
if not remote_sha1 == local_sha1:
raise AnsibleError("Remote sha1 hash {0} does not match local hash {1}".format(to_native(remote_sha1), to_native(local_sha1)))
def fetch_file(self, in_path, out_path):
super(Connection, self).fetch_file(in_path, out_path)
in_path = self._shell._unquote(in_path)
out_path = out_path.replace('\\', '/')
# consistent with other connection plugins, we assume the caller has created the target dir
display.vvv('FETCH "%s" TO "%s"' % (in_path, out_path), host=self._winrm_host)
buffer_size = 2**19 # 0.5MB chunks
out_file = None
try:
offset = 0
while True:
try:
script = '''
$path = '%(path)s'
If (Test-Path -Path $path -PathType Leaf)
{
$buffer_size = %(buffer_size)d
$offset = %(offset)d
$stream = New-Object -TypeName IO.FileStream($path, [IO.FileMode]::Open, [IO.FileAccess]::Read, [IO.FileShare]::ReadWrite)
$stream.Seek($offset, [System.IO.SeekOrigin]::Begin) > $null
$buffer = New-Object -TypeName byte[] $buffer_size
$bytes_read = $stream.Read($buffer, 0, $buffer_size)
if ($bytes_read -gt 0) {
$bytes = $buffer[0..($bytes_read - 1)]
[System.Convert]::ToBase64String($bytes)
}
$stream.Close() > $null
}
ElseIf (Test-Path -Path $path -PathType Container)
{
Write-Host "[DIR]";
}
Else
{
Write-Error "$path does not exist";
Exit 1;
}
''' % dict(buffer_size=buffer_size, path=self._shell._escape(in_path), offset=offset)
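                    # e.g. the first iteration reads bytes [0, 524288) of in_path and returns
                    # them base64-encoded; a bare '[DIR]' reply short-circuits directories (note).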
display.vvvvv('WINRM FETCH "%s" to "%s" (offset=%d)' % (in_path, out_path, offset), host=self._winrm_host)
cmd_parts = self._shell._encode_script(script, as_list=True, preserve_rc=False)
result = self._winrm_exec(cmd_parts[0], cmd_parts[1:])
if result.status_code != 0:
raise IOError(to_native(result.std_err))
if result.std_out.strip() == '[DIR]':
data = None
else:
data = base64.b64decode(result.std_out.strip())
if data is None:
break
else:
if not out_file:
# If out_path is a directory and we're expecting a file, bail out now.
if os.path.isdir(to_bytes(out_path, errors='surrogate_or_strict')):
break
out_file = open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb')
out_file.write(data)
if len(data) < buffer_size:
break
offset += len(data)
except Exception:
traceback.print_exc()
raise AnsibleError('failed to transfer file to "%s"' % to_native(out_path))
finally:
if out_file:
out_file.close()
def close(self):
if self.protocol and self.shell_id:
display.vvvvv('WINRM CLOSE SHELL: %s' % self.shell_id, host=self._winrm_host)
self.protocol.close_shell(self.shell_id)
self.shell_id = None
self.protocol = None
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,390 |
Executing kinit for Kerberos not including PATH environment variable
|
### Summary
When running winrm with Kerberos for authentication, the kinit executable ends up not being found because the spawned subprocess does not take the executing playbook's $PATH into consideration.
https://github.com/ansible/ansible/blob/4d984613f5e16e205434cdf7290572e62b40bf62/lib/ansible/plugins/connection/winrm.py#L323-L329
https://github.com/ansible/ansible/blob/4d984613f5e16e205434cdf7290572e62b40bf62/lib/ansible/plugins/connection/winrm.py#L386-L389
Basically I run a conda environment with kinit installed there and not on the system. So the path to kinit is something like `/home/<user>/.miniconda/env/<env_name>/bin/kinit`. There's no way of knowing which user is going to be executing this playbook, so it's hard to specify an absolute path to the kinit command.
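A minimal sketch of the failure mode and of the environment-copy approach (mirroring the `kinit_env_vars` handling visible in the plugin source above); the allow-list of variables here is an assumption for illustration only:

```python
import os
import subprocess

# The connection plugin spawns kinit with a minimal environment that contains only
# KRB5CCNAME, so PATH from the parent process (e.g. an activated conda env) is lost.
krb5env = dict(KRB5CCNAME="FILE:/tmp/krb5cc_example")

# Copy selected variables from the controller's environment into the child
# environment before spawning kinit (assumed allow-list, for illustration only).
for var in ("PATH", "KRB5_CONFIG"):
    if var in os.environ:
        krb5env[var] = os.environ[var]

# With PATH restored, a bare 'kinit' resolves inside the conda environment again.
proc = subprocess.Popen(["kinit", "[email protected]"],
                        stdin=subprocess.PIPE, env=krb5env)
proc.communicate(b"password\n")
```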
### Issue Type
Bug Report
### Component Name
winrm
### Ansible Version
```console
$ ansible --version
ansible 2.9.5
config file = /path/to/local.ansible.cfg
configured module search path = ['/home/<user>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/<user>/miniconda3/envs/ansible/bin/ansible
python version = 3.6.10 |Anaconda, Inc.| (default, Jan 7 2020, 21:14:29) [GCC 7.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_ASK_PASS(/playbook/local.ansible.cfg) = True
DEFAULT_HASH_BEHAVIOUR(/playbook/local.ansible.cfg) = merge
DEFAULT_SCP_IF_SSH(/playbook/local.ansible.cfg) = True
HOST_KEY_CHECKING(/playbook/local.ansible.cfg) = False
```
### OS / Environment
Running on WSL Ubuntu 18.04
Connecting to Windows Server 2019
### Steps to Reproduce
```yaml
- hosts: all
gather_facts: yes
```
ansible.cfg
```ini
[defaults]
host_key_checking = False
ask_pass = True
hash_behaviour = merge
[ssh_connection]
scp_if_ssh=True
```
hosts.ini
```
[all:vars]
ansible_become_method=runas
ansible_connection=winrm
ansible_winrm_transport=kerberos
ansible_winrm_server_cert_validation=ignore
ansible_winrm_kerberos_delegation=true
```
### Expected Results
Kerberos authentication to happen when executing tasks
### Actual Results
```console
$> ansible-playbook -i inventories/servers/windows.ini test.yaml --user [email protected] -vvvvv
ansible-playbook 2.9.5
config file = /path_to_playbook/local.ansible.cfg
configured module search path = ['/home/<user>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/<user>/miniconda3/envs/ansible/bin/ansible-playbook
python version = 3.6.10 |Anaconda, Inc.| (default, Jan 7 2020, 21:14:29) [GCC 7.3.0]
Using /path_to_playbook/local.ansible.cfg as config file
SSH password:
setting up inventory plugins
host_list declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
script declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
auto declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
yaml declined parsing /path_to_playbook/inventories/servers/windows.ini as it did not pass its verify_file() method
Parsed /path_to_playbook/inventories/servers/windows.ini inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: test.yaml ******************************************************************************************************************
Positional arguments: test.yaml
verbosity: 5
ask_pass: True
remote_user: [email protected]
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/path_to_playbook/inventories/servers/windows.ini',)
forks: 5
1 plays in test.yaml
PLAY [all] ***************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************
task path: /path_to_playbook/test.yaml:1
Using module file /home/<user>/miniconda3/envs/ansible/lib/python3.6/site-packages/ansible/modules/windows/setup.ps1
Pipelining is enabled.
<<myhost>> ESTABLISH WINRM CONNECTION FOR USER: [email protected] on PORT 5986 TO <myhost>
creating Kerberos CC at /tmp/tmpeardraxc
calling kinit with subprocess for principal [email protected]
fatal: [<myhost>]: UNREACHABLE! => {
"changed": false,
"msg": "Kerberos auth failure when calling kinit cmd 'kinit': [Errno 2] No such file or directory: 'kinit': 'kinit'",
"unreachable": true
}
PLAY RECAP ***************************************************************************************************************************
<myhost> : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77390
|
https://github.com/ansible/ansible/pull/77401
|
353511a900f6216a25a25d8a36528f636428b57b
|
60b4200bc6fee69384da990bb7884f58577fc724
| 2022-03-29T19:47:48Z |
python
| 2022-03-30T00:36:56Z |
test/units/plugins/connection/test_winrm.py
|
# -*- coding: utf-8 -*-
# (c) 2018, Jordan Borean <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
from io import StringIO
from mock import MagicMock
from ansible.errors import AnsibleConnectionFailure
from ansible.module_utils._text import to_bytes
from ansible.playbook.play_context import PlayContext
from ansible.plugins.loader import connection_loader
from ansible.plugins.connection import winrm
pytest.importorskip("winrm")
class TestConnectionWinRM(object):
OPTIONS_DATA = (
# default options
(
{'_extras': {}},
{},
{
'_kerb_managed': False,
'_kinit_cmd': 'kinit',
'_winrm_connection_timeout': None,
'_winrm_host': 'inventory_hostname',
'_winrm_kwargs': {'username': None, 'password': None},
'_winrm_pass': None,
'_winrm_path': '/wsman',
'_winrm_port': 5986,
'_winrm_scheme': 'https',
'_winrm_transport': ['ssl'],
'_winrm_user': None
},
False
),
# http through port
(
{'_extras': {}, 'ansible_port': 5985},
{},
{
'_winrm_kwargs': {'username': None, 'password': None},
'_winrm_port': 5985,
'_winrm_scheme': 'http',
'_winrm_transport': ['plaintext'],
},
False
),
# kerberos user with kerb present
(
{'_extras': {}, 'ansible_user': '[email protected]'},
{},
{
'_kerb_managed': False,
'_kinit_cmd': 'kinit',
'_winrm_kwargs': {'username': '[email protected]',
'password': None},
'_winrm_pass': None,
'_winrm_transport': ['kerberos', 'ssl'],
'_winrm_user': '[email protected]'
},
True
),
# kerberos user without kerb present
(
{'_extras': {}, 'ansible_user': '[email protected]'},
{},
{
'_kerb_managed': False,
'_kinit_cmd': 'kinit',
'_winrm_kwargs': {'username': '[email protected]',
'password': None},
'_winrm_pass': None,
'_winrm_transport': ['ssl'],
'_winrm_user': '[email protected]'
},
False
),
# kerberos user with managed ticket (implicit)
(
{'_extras': {}, 'ansible_user': '[email protected]'},
{'remote_password': 'pass'},
{
'_kerb_managed': True,
'_kinit_cmd': 'kinit',
'_winrm_kwargs': {'username': '[email protected]',
'password': 'pass'},
'_winrm_pass': 'pass',
'_winrm_transport': ['kerberos', 'ssl'],
'_winrm_user': '[email protected]'
},
True
),
# kerb with managed ticket (explicit)
(
{'_extras': {}, 'ansible_user': '[email protected]',
'ansible_winrm_kinit_mode': 'managed'},
{'password': 'pass'},
{
'_kerb_managed': True,
},
True
),
# kerb with unmanaged ticket (explicit))
(
{'_extras': {}, 'ansible_user': '[email protected]',
'ansible_winrm_kinit_mode': 'manual'},
{'password': 'pass'},
{
'_kerb_managed': False,
},
True
),
# transport override (single)
(
{'_extras': {}, 'ansible_user': '[email protected]',
'ansible_winrm_transport': 'ntlm'},
{},
{
'_winrm_kwargs': {'username': '[email protected]',
'password': None},
'_winrm_pass': None,
'_winrm_transport': ['ntlm'],
},
False
),
# transport override (list)
(
{'_extras': {}, 'ansible_user': '[email protected]',
'ansible_winrm_transport': ['ntlm', 'certificate']},
{},
{
'_winrm_kwargs': {'username': '[email protected]',
'password': None},
'_winrm_pass': None,
'_winrm_transport': ['ntlm', 'certificate'],
},
False
),
# winrm extras
(
{'_extras': {'ansible_winrm_server_cert_validation': 'ignore',
'ansible_winrm_service': 'WSMAN'}},
{},
{
'_winrm_kwargs': {'username': None, 'password': None,
'server_cert_validation': 'ignore',
'service': 'WSMAN'},
},
False
),
# direct override
(
{'_extras': {}, 'ansible_winrm_connection_timeout': 5},
{'connection_timeout': 10},
{
'_winrm_connection_timeout': 10,
},
False
),
# password as ansible_password
(
{'_extras': {}, 'ansible_password': 'pass'},
{},
{
'_winrm_pass': 'pass',
'_winrm_kwargs': {'username': None, 'password': 'pass'}
},
False
),
# password as ansible_winrm_pass
(
{'_extras': {}, 'ansible_winrm_pass': 'pass'},
{},
{
'_winrm_pass': 'pass',
'_winrm_kwargs': {'username': None, 'password': 'pass'}
},
False
),
# password as ansible_winrm_password
(
{'_extras': {}, 'ansible_winrm_password': 'pass'},
{},
{
'_winrm_pass': 'pass',
'_winrm_kwargs': {'username': None, 'password': 'pass'}
},
False
),
)
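    # Each OPTIONS_DATA tuple above is (hostvars, direct options, expected connection
    # attrs, simulated HAVE_KERBEROS); e.g. the http case asserts that ansible_port=5985
    # flips the scheme to 'http' and the transport to ['plaintext'] (descriptive note).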
# pylint bug: https://github.com/PyCQA/pylint/issues/511
# pylint: disable=undefined-variable
@pytest.mark.parametrize('options, direct, expected, kerb',
((o, d, e, k) for o, d, e, k in OPTIONS_DATA))
def test_set_options(self, options, direct, expected, kerb):
winrm.HAVE_KERBEROS = kerb
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('winrm', pc, new_stdin)
conn.set_options(var_options=options, direct=direct)
conn._build_winrm_kwargs()
        for attr, expected_val in expected.items():
            actual = getattr(conn, attr)
            assert actual == expected_val, \
                "winrm attr '%s', actual '%s' != expected '%s'"\
                % (attr, actual, expected_val)
class TestWinRMKerbAuth(object):
@pytest.mark.parametrize('options, expected', [
[{"_extras": {}},
(["kinit", "user@domain"],)],
[{"_extras": {}, 'ansible_winrm_kinit_cmd': 'kinit2'},
(["kinit2", "user@domain"],)],
[{"_extras": {'ansible_winrm_kerberos_delegation': True}},
(["kinit", "-f", "user@domain"],)],
[{"_extras": {}, 'ansible_winrm_kinit_args': '-f -p'},
(["kinit", "-f", "-p", "user@domain"],)],
[{"_extras": {}, 'ansible_winrm_kerberos_delegation': True, 'ansible_winrm_kinit_args': '-p'},
(["kinit", "-p", "user@domain"],)]
])
def test_kinit_success_subprocess(self, monkeypatch, options, expected):
def mock_communicate(input=None, timeout=None):
return b"", b""
mock_popen = MagicMock()
mock_popen.return_value.communicate = mock_communicate
mock_popen.return_value.returncode = 0
monkeypatch.setattr("subprocess.Popen", mock_popen)
winrm.HAS_PEXPECT = False
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('winrm', pc, new_stdin)
conn.set_options(var_options=options)
conn._build_winrm_kwargs()
conn._kerb_auth("user@domain", "pass")
mock_calls = mock_popen.mock_calls
assert len(mock_calls) == 1
assert mock_calls[0][1] == expected
actual_env = mock_calls[0][2]['env']
assert list(actual_env.keys()) == ['KRB5CCNAME']
assert actual_env['KRB5CCNAME'].startswith("FILE:/")
@pytest.mark.parametrize('options, expected', [
[{"_extras": {}},
("kinit", ["user@domain"],)],
[{"_extras": {}, 'ansible_winrm_kinit_cmd': 'kinit2'},
("kinit2", ["user@domain"],)],
[{"_extras": {'ansible_winrm_kerberos_delegation': True}},
("kinit", ["-f", "user@domain"],)],
[{"_extras": {}, 'ansible_winrm_kinit_args': '-f -p'},
("kinit", ["-f", "-p", "user@domain"],)],
[{"_extras": {}, 'ansible_winrm_kerberos_delegation': True, 'ansible_winrm_kinit_args': '-p'},
("kinit", ["-p", "user@domain"],)]
])
def test_kinit_success_pexpect(self, monkeypatch, options, expected):
pytest.importorskip("pexpect")
mock_pexpect = MagicMock()
mock_pexpect.return_value.exitstatus = 0
monkeypatch.setattr("pexpect.spawn", mock_pexpect)
winrm.HAS_PEXPECT = True
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('winrm', pc, new_stdin)
conn.set_options(var_options=options)
conn._build_winrm_kwargs()
conn._kerb_auth("user@domain", "pass")
mock_calls = mock_pexpect.mock_calls
assert mock_calls[0][1] == expected
actual_env = mock_calls[0][2]['env']
assert list(actual_env.keys()) == ['KRB5CCNAME']
assert actual_env['KRB5CCNAME'].startswith("FILE:/")
assert mock_calls[0][2]['echo'] is False
assert mock_calls[1][0] == "().expect"
assert mock_calls[1][1] == (".*:",)
assert mock_calls[2][0] == "().sendline"
assert mock_calls[2][1] == ("pass",)
assert mock_calls[3][0] == "().read"
assert mock_calls[4][0] == "().wait"
def test_kinit_with_missing_executable_subprocess(self, monkeypatch):
expected_err = "[Errno 2] No such file or directory: " \
"'/fake/kinit': '/fake/kinit'"
mock_popen = MagicMock(side_effect=OSError(expected_err))
monkeypatch.setattr("subprocess.Popen", mock_popen)
winrm.HAS_PEXPECT = False
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('winrm', pc, new_stdin)
options = {"_extras": {}, "ansible_winrm_kinit_cmd": "/fake/kinit"}
conn.set_options(var_options=options)
conn._build_winrm_kwargs()
with pytest.raises(AnsibleConnectionFailure) as err:
conn._kerb_auth("user@domain", "pass")
assert str(err.value) == "Kerberos auth failure when calling " \
"kinit cmd '/fake/kinit': %s" % expected_err
def test_kinit_with_missing_executable_pexpect(self, monkeypatch):
pexpect = pytest.importorskip("pexpect")
expected_err = "The command was not found or was not " \
"executable: /fake/kinit"
mock_pexpect = \
MagicMock(side_effect=pexpect.ExceptionPexpect(expected_err))
monkeypatch.setattr("pexpect.spawn", mock_pexpect)
winrm.HAS_PEXPECT = True
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('winrm', pc, new_stdin)
options = {"_extras": {}, "ansible_winrm_kinit_cmd": "/fake/kinit"}
conn.set_options(var_options=options)
conn._build_winrm_kwargs()
with pytest.raises(AnsibleConnectionFailure) as err:
conn._kerb_auth("user@domain", "pass")
assert str(err.value) == "Kerberos auth failure when calling " \
"kinit cmd '/fake/kinit': %s" % expected_err
def test_kinit_error_subprocess(self, monkeypatch):
expected_err = "kinit: krb5_parse_name: " \
"Configuration file does not specify default realm"
def mock_communicate(input=None, timeout=None):
return b"", to_bytes(expected_err)
mock_popen = MagicMock()
mock_popen.return_value.communicate = mock_communicate
mock_popen.return_value.returncode = 1
monkeypatch.setattr("subprocess.Popen", mock_popen)
winrm.HAS_PEXPECT = False
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('winrm', pc, new_stdin)
conn.set_options(var_options={"_extras": {}})
conn._build_winrm_kwargs()
with pytest.raises(AnsibleConnectionFailure) as err:
conn._kerb_auth("invaliduser", "pass")
assert str(err.value) == \
"Kerberos auth failure for principal invaliduser with " \
"subprocess: %s" % (expected_err)
def test_kinit_error_pexpect(self, monkeypatch):
pytest.importorskip("pexpect")
expected_err = "Configuration file does not specify default realm"
mock_pexpect = MagicMock()
mock_pexpect.return_value.expect = MagicMock(side_effect=OSError)
mock_pexpect.return_value.read.return_value = to_bytes(expected_err)
mock_pexpect.return_value.exitstatus = 1
monkeypatch.setattr("pexpect.spawn", mock_pexpect)
winrm.HAS_PEXPECT = True
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('winrm', pc, new_stdin)
conn.set_options(var_options={"_extras": {}})
conn._build_winrm_kwargs()
with pytest.raises(AnsibleConnectionFailure) as err:
conn._kerb_auth("invaliduser", "pass")
assert str(err.value) == \
"Kerberos auth failure for principal invaliduser with " \
"pexpect: %s" % (expected_err)
def test_kinit_error_pass_in_output_subprocess(self, monkeypatch):
def mock_communicate(input=None, timeout=None):
return b"", b"Error with kinit\n" + input
mock_popen = MagicMock()
mock_popen.return_value.communicate = mock_communicate
mock_popen.return_value.returncode = 1
monkeypatch.setattr("subprocess.Popen", mock_popen)
winrm.HAS_PEXPECT = False
pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('winrm', pc, new_stdin)
conn.set_options(var_options={"_extras": {}})
conn._build_winrm_kwargs()
with pytest.raises(AnsibleConnectionFailure) as err:
conn._kerb_auth("username", "password")
assert str(err.value) == \
"Kerberos auth failure for principal username with subprocess: " \
"Error with kinit\n<redacted>"
def test_kinit_error_pass_in_output_pexpect(self, monkeypatch):
pytest.importorskip("pexpect")
mock_pexpect = MagicMock()
mock_pexpect.return_value.expect = MagicMock()
mock_pexpect.return_value.read.return_value = \
b"Error with kinit\npassword\n"
mock_pexpect.return_value.exitstatus = 1
monkeypatch.setattr("pexpect.spawn", mock_pexpect)
winrm.HAS_PEXPECT = True
        pc = PlayContext()
new_stdin = StringIO()
conn = connection_loader.get('winrm', pc, new_stdin)
conn.set_options(var_options={"_extras": {}})
conn._build_winrm_kwargs()
with pytest.raises(AnsibleConnectionFailure) as err:
conn._kerb_auth("username", "password")
assert str(err.value) == \
"Kerberos auth failure for principal username with pexpect: " \
"Error with kinit\n<redacted>"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,393 |
Remove deprecated ALLOW_WORLD_READABLE_TMPFILES
|
### Summary
The config option `ALLOW_WORLD_READABLE_TMPFILES` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.14.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77393
|
https://github.com/ansible/ansible/pull/77410
|
60b4200bc6fee69384da990bb7884f58577fc724
|
e080bae766fdf98a659cc96a0160ca17f6b4c2cd
| 2022-03-29T21:34:01Z |
python
| 2022-03-30T13:46:57Z |
changelogs/fragments/77393-remove-allow_world_readable_tmpfiles.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,393 |
Remove deprecated ALLOW_WORLD_READABLE_TMPFILES
|
### Summary
The config option `ALLOW_WORLD_READABLE_TMPFILES` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.14.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77393
|
https://github.com/ansible/ansible/pull/77410
|
60b4200bc6fee69384da990bb7884f58577fc724
|
e080bae766fdf98a659cc96a0160ca17f6b4c2cd
| 2022-03-29T21:34:01Z |
python
| 2022-03-30T13:46:57Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ALLOW_WORLD_READABLE_TMPFILES:
name: Allow world-readable temporary files
description:
- This setting has been moved to the individual shell plugins as a plugin option :ref:`shell_plugins`.
- The existing configuration settings are still accepted with the shell plugin adding additional options, like variables.
- This message will be removed in 2.14.
type: boolean
default: False
deprecated: # (kept for autodetection and removal, deprecation is irrelevant since w/o settings this can never show runtime msg)
why: moved to shell plugins
version: "2.14"
alternatives: 'world_readable_tmp'
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
description: This allows you to chose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Accept list of cowsay templates that are 'safe' to use, set to empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
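# Example (illustrative): enable pipelining for a single run with
#   ANSIBLE_PIPELINING=1 ansible-playbook site.yml
# after disabling 'requiretty' in /etc/sudoers on the managed hosts (see above).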
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
  description: Sets the default value for the any_errors_fatal keyword; if True, task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description:
    - This setting controls if become is skipped when remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
- 'The password file to use for the become plugin. --become-password-file.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
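# Example (illustrative) ansible.cfg snippet combining the four cache settings above,
# assuming the bundled 'jsonfile' cache plugin:
#   [defaults]
#   fact_caching = jsonfile
#   fact_caching_connection = /tmp/ansible_fact_cache
#   fact_caching_prefix = ansible_facts
#   fact_caching_timeout = 86400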
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (with the collection metadata key `requires_ansible`).
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
  description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
description: 'The password file to use for the connection plugin. --connection-password-file.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have its coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
    - By default Ansible will issue a warning when one is received from a task action (module or action plugin)
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
COMMAND_WARNINGS:
name: Command module warnings
default: False
description:
- Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module.
- These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option ``warn``.
- As of version 2.11, this is disabled by default.
env: [{name: ANSIBLE_COMMAND_WARNINGS}]
ini:
- {key: command_warnings, section: defaults}
type: boolean
version_added: "1.8"
deprecated:
why: the command warnings feature is being removed
version: "2.14"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backward compatibility,
however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
through the templating engine late
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ~
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callbacks_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fallback to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action being created by the implicit task is currently ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules,
by default this will be ``ansible.builtin.setup`` for POSIX systems but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
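# Illustrative ini usage for the option above (values are examples only):
#   [defaults]
#   gathering = explicit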
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
- The Ansible project recommends you **avoid ``merge`` for new projects.**
- It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
New projects should **avoid 'merge'**.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
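# A minimal sketch of the 'combine' alternative recommended above (variable
# names are hypothetical): instead of hash_behaviour=merge, merge explicitly:
#   merged_conf: "{{ base_conf | combine(host_conf, recursive=True) }}"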
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
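# Illustrative usage (paths are hypothetical): multiple inventory sources can
# be passed as a comma separated list, e.g.
#   ANSIBLE_INVENTORY=inventories/prod.yml,inventories/staging.yml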
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: LIBVIRT_LXC_NOSECLABEL
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_LIBVIRT_LXC_NOSECLABEL`` environment variable
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will disable newer
      style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- Option for connections using a certificate or key file to authenticate, rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
    - Whether or not to enable the task debugger; this previously was done as a strategy plugin.
    - Now all strategy plugins can inherit this behavior. The debugger defaults to activating
      when a task is failed or a host is unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
    - True specifies that the debugger will honor ignore_errors; False will not.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
    - These warnings can be silenced by setting this option to ``ignore``.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error2
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
    - The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
      outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
    - A list of GPG status codes to ignore during GPG signature verification.
      See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
    - This should be a positive integer or ``all`` to indicate all signatures must successfully validate the collection.
    - Prepend ``+`` to the value to fail if no valid signatures are found for the collection.
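# Illustrative values for the option above: '2' requires at least two valid
# signatures, 'all' requires every signature to verify, and '+1' additionally
# fails when no valid signatures are found.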
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
  description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, a warning, or to just ignore it.
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
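# Illustrative per-host override (host name and path are hypothetical), which
# skips discovery entirely:
#   web01 ansible_python_interpreter=/usr/bin/python3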
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.10
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- /usr/bin/python3
- /usr/libexec/platform-python
- python2.7
- python2.6
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
  description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
JINJA2_NATIVE_WARNING:
name: Running older than required Jinja version for jinja2_native warning
default: True
description: Toggle to control showing warnings related to running a Jinja version
older than required for jinja2_native
env:
- name: ANSIBLE_JINJA2_NATIVE_WARNING
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.17"
ini:
- {key: jinja2_native_warning, section: defaults}
type: boolean
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
    - Facts are available inside the `ansible_facts` variable; this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
  description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows returning to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
  description: This controls the retry timeout for the persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
  description: This controls the amount of time to wait for a response from the remote device before timing out the persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
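# Illustrative effect (variable name is hypothetical): with 'to_json' in this
# list, "{{ mydict | to_json }}" remains a JSON string instead of being
# re-parsed into a dictionary during templating.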
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: default list of tags to run in your plays, Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: default list of tags to skip in your plays, has precedence over Run Tags
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
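# Illustrative usage: ANSIBLE_TASK_TIMEOUT=300 limits each task to five
# minutes; a per-task 'timeout' keyword still takes precedence.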
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows to change the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
  description: This variable is used to enable a bastion/jump host with the netconf connection. If set to True, the bastion/jump
    host SSH settings should be present in the ~/.ssh/config file; alternatively, it can be set
    to a custom SSH configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
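# Illustrative trigger for this warning (parameter is hypothetical): an
# unquoted YAML value such as
#   version: 1.10
# is parsed as the float 1.1 and then converted back to a string; fully
# quoting it ('1.10') preserves the intended value.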
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
- A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group.
Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
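# Illustrative only (not part of the original file): the env/ini mappings
# above correspond to ansible.cfg entries such as the following, shown with
# their documented defaults:
#
# [defaults]
# vars_plugins_enabled = host_group_vars
# precedence = all_inventory, groups_inventory, all_plugins_inventory, all_plugins_play, groups_plugins_inventory, groups_plugins_play
# yaml_valid_extensions = .yml, .yaml, .json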
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,393 |
Remove deprecated ALLOW_WORLD_READABLE_TMPFILES
|
### Summary
The config option `ALLOW_WORLD_READABLE_TMPFILES` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.14.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77393
|
https://github.com/ansible/ansible/pull/77410
|
60b4200bc6fee69384da990bb7884f58577fc724
|
e080bae766fdf98a659cc96a0160ca17f6b4c2cd
| 2022-03-29T21:34:01Z |
python
| 2022-03-30T13:46:57Z |
lib/ansible/plugins/action/__init__.py
|
# coding: utf-8
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import json
import os
import random
import re
import shlex
import stat
import tempfile
from abc import ABC, abstractmethod
from collections.abc import Sequence
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleActionSkip, AnsibleActionFail, AnsibleAuthenticationFailure
from ansible.executor.module_common import modify_module
from ansible.executor.interpreter_discovery import discover_interpreter, InterpreterDiscoveryRequiredError
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator
from ansible.module_utils.errors import UnsupportedError
from ansible.module_utils.json_utils import _filter_non_json_lines
from ansible.module_utils.six import binary_type, string_types, text_type
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.parsing.utils.jsonify import jsonify
from ansible.release import __version__
from ansible.utils.collection_loader import resource_from_fqcr
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import wrap_var, AnsibleUnsafeText
from ansible.vars.clean import remove_internal_keys
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
class ActionBase(ABC):
'''
This class is the base class for all action plugins, and defines
code common to all actions. The base class handles the connection
by putting/getting files and executing commands based on the current
action in use.
'''
# A set of valid arguments
_VALID_ARGS = frozenset([]) # type: frozenset[str]
def __init__(self, task, connection, play_context, loader, templar, shared_loader_obj):
self._task = task
self._connection = connection
self._play_context = play_context
self._loader = loader
self._templar = templar
self._shared_loader_obj = shared_loader_obj
self._cleanup_remote_tmp = False
self._supports_check_mode = True
self._supports_async = False
# interpreter discovery state
self._discovered_interpreter_key = None
self._discovered_interpreter = False
self._discovery_deprecation_warnings = []
self._discovery_warnings = []
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
self._used_interpreter = None
@abstractmethod
def run(self, tmp=None, task_vars=None):
""" Action Plugins should implement this method to perform their
tasks. Everything else in this base class is a helper method for the
action plugin to do that.
:kwarg tmp: Deprecated parameter. This is no longer used. An action plugin that calls
another one and wants to use the same remote tmp for both should set
self._connection._shell.tmpdir rather than this parameter.
:kwarg task_vars: The variables (host vars, group vars, config vars,
etc) associated with this task.
:returns: dictionary of results from the module
Implementors of action modules may find the following variables especially useful:
* Module parameters. These are stored in self._task.args
"""
# does not default to {'changed': False, 'failed': False}, as it breaks async
result = {}
if tmp is not None:
result['warning'] = ['ActionModule.run() no longer honors the tmp parameter. Action'
' plugins should set self._connection._shell.tmpdir to share'
' the tmpdir']
del tmp
if self._task.async_val and not self._supports_async:
raise AnsibleActionFail('async is not supported for this task.')
elif self._play_context.check_mode and not self._supports_check_mode:
raise AnsibleActionSkip('check mode is not supported for this task.')
elif self._task.async_val and self._play_context.check_mode:
raise AnsibleActionFail('check mode and async cannot be used on same task.')
# Error if invalid argument is passed
if self._VALID_ARGS:
task_opts = frozenset(self._task.args.keys())
bad_opts = task_opts.difference(self._VALID_ARGS)
if bad_opts:
raise AnsibleActionFail('Invalid options for %s: %s' % (self._task.action, ','.join(list(bad_opts))))
if self._connection._shell.tmpdir is None and self._early_needs_tmp_path():
self._make_tmp_path()
return result
def validate_argument_spec(self, argument_spec=None,
mutually_exclusive=None,
required_together=None,
required_one_of=None,
required_if=None,
required_by=None,
):
"""Validate an argument spec against the task args
This will return a tuple of (ValidationResult, dict) where the dict
is the validated, coerced, and normalized task args.
Be cautious when passing ``new_module_args`` directly to a
module invocation, as it will contain the defaults, and not only
the args supplied from the task. If you do this, the module
should not define ``mutually_exclusive`` or similar.
This code is roughly copied from the ``validate_argument_spec``
action plugin for use by other action plugins.
"""
new_module_args = self._task.args.copy()
validator = ArgumentSpecValidator(
argument_spec,
mutually_exclusive=mutually_exclusive,
required_together=required_together,
required_one_of=required_one_of,
required_if=required_if,
required_by=required_by,
)
validation_result = validator.validate(new_module_args)
new_module_args.update(validation_result.validated_parameters)
try:
error = validation_result.errors[0]
except IndexError:
error = None
# Fail for validation errors, even in check mode
if error:
msg = validation_result.errors.msg
if isinstance(error, UnsupportedError):
msg = f"Unsupported parameters for ({self._load_name}) module: {msg}"
raise AnsibleActionFail(msg)
return validation_result, new_module_args
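# Illustrative usage sketch (not in the original source): a subclass would
# typically validate its task args from run(), e.g.
#
#     validation_result, new_module_args = self.validate_argument_spec(
#         argument_spec={'path': {'type': 'path', 'required': True}},
#     )
#
# new_module_args then holds the coerced task args plus spec defaults, per
# the caveat in the docstring above.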
def cleanup(self, force=False):
"""Method to perform a clean up at the end of an action plugin execution
By default this is designed to clean up the shell tmpdir, and is toggled based on whether
async is in use
Action plugins may override this if they deem necessary, but should still call this method
via super
"""
if force or not self._task.async_val:
self._remove_tmp_path(self._connection._shell.tmpdir)
def get_plugin_option(self, plugin, option, default=None):
"""Helper to get an option from a plugin without having to use
the try/except dance everywhere to set a default
"""
try:
return plugin.get_option(option)
except (AttributeError, KeyError):
return default
def get_become_option(self, option, default=None):
return self.get_plugin_option(self._connection.become, option, default=default)
def get_connection_option(self, option, default=None):
return self.get_plugin_option(self._connection, option, default=default)
def get_shell_option(self, option, default=None):
return self.get_plugin_option(self._connection._shell, option, default=default)
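# Illustrative only: these helpers are exercised throughout this class,
# e.g. self.get_shell_option('remote_tmp', default='~/.ansible/tmp') or
# self.get_become_option('become_user'); the supplied default is returned
# when the plugin does not define the option.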
def _remote_file_exists(self, path):
cmd = self._connection._shell.exists(path)
result = self._low_level_execute_command(cmd=cmd, sudoable=True)
if result['rc'] == 0:
return True
return False
def _configure_module(self, module_name, module_args, task_vars):
'''
Handles the loading and templating of the module code through the
modify_module() function.
'''
if self._task.delegate_to:
use_vars = task_vars.get('ansible_delegated_vars')[self._task.delegate_to]
else:
use_vars = task_vars
split_module_name = module_name.split('.')
collection_name = '.'.join(split_module_name[0:2]) if len(split_module_name) > 2 else ''
leaf_module_name = resource_from_fqcr(module_name)
# Search module path(s) for named module.
for mod_type in self._connection.module_implementation_preferences:
# Check to determine if PowerShell modules are supported, and apply
# some fixes (hacks) to module name + args.
if mod_type == '.ps1':
# FIXME: This should be temporary and moved to an exec subsystem plugin where we can define the mapping
# for each subsystem.
win_collection = 'ansible.windows'
rewrite_collection_names = ['ansible.builtin', 'ansible.legacy', '']
# async_status, win_stat, win_file, win_copy, and win_ping are not exactly like their
# python counterparts, but they are compatible enough for our
# internal usage
# NB: we only rewrite the module if it's not being called by the user (eg, an action calling something else)
# and if it's unqualified or FQ to a builtin
if leaf_module_name in ('stat', 'file', 'copy', 'ping') and \
collection_name in rewrite_collection_names and self._task.action != module_name:
module_name = '%s.win_%s' % (win_collection, leaf_module_name)
elif leaf_module_name == 'async_status' and collection_name in rewrite_collection_names:
module_name = '%s.%s' % (win_collection, leaf_module_name)
# TODO: move this tweak down to the modules, not extensible here
# Remove extra quotes surrounding path parameters before sending to module.
if leaf_module_name in ['win_stat', 'win_file', 'win_copy', 'slurp'] and module_args and \
hasattr(self._connection._shell, '_unquote'):
for key in ('src', 'dest', 'path'):
if key in module_args:
module_args[key] = self._connection._shell._unquote(module_args[key])
result = self._shared_loader_obj.module_loader.find_plugin_with_context(module_name, mod_type, collection_list=self._task.collections)
if not result.resolved:
if result.redirect_list and len(result.redirect_list) > 1:
# take the last one in the redirect list, we may have successfully jumped through N other redirects
target_module_name = result.redirect_list[-1]
raise AnsibleError("The module {0} was redirected to {1}, which could not be loaded.".format(module_name, target_module_name))
module_path = result.plugin_resolved_path
if module_path:
break
else: # This is a for-else: http://bit.ly/1ElPkyg
raise AnsibleError("The module %s was not found in configured module paths" % (module_name))
# insert shared code and arguments into the module
final_environment = dict()
self._compute_environment_string(final_environment)
become_kwargs = {}
if self._connection.become:
become_kwargs['become'] = True
become_kwargs['become_method'] = self._connection.become.name
become_kwargs['become_user'] = self._connection.become.get_option('become_user',
playcontext=self._play_context)
become_kwargs['become_password'] = self._connection.become.get_option('become_pass',
playcontext=self._play_context)
become_kwargs['become_flags'] = self._connection.become.get_option('become_flags',
playcontext=self._play_context)
# modify_module will exit early if interpreter discovery is required; re-run after if necessary
for dummy in (1, 2):
try:
(module_data, module_style, module_shebang) = modify_module(module_name, module_path, module_args, self._templar,
task_vars=use_vars,
module_compression=self._play_context.module_compression,
async_timeout=self._task.async_val,
environment=final_environment,
remote_is_local=bool(getattr(self._connection, '_remote_is_local', False)),
**become_kwargs)
break
except InterpreterDiscoveryRequiredError as idre:
self._discovered_interpreter = AnsibleUnsafeText(discover_interpreter(
action=self,
interpreter_name=idre.interpreter_name,
discovery_mode=idre.discovery_mode,
task_vars=use_vars))
# update the local task_vars with the discovered interpreter (which might be None);
# we'll propagate back to the controller in the task result
discovered_key = 'discovered_interpreter_%s' % idre.interpreter_name
# update the local vars copy for the retry
use_vars['ansible_facts'][discovered_key] = self._discovered_interpreter
# TODO: this condition prevents 'wrong host' from being updated
# but in future we would want to be able to update 'delegated host facts'
# irrespective of task settings
if not self._task.delegate_to or self._task.delegate_facts:
# store in local task_vars facts collection for the retry and any other usages in this worker
task_vars['ansible_facts'][discovered_key] = self._discovered_interpreter
# preserve this so _execute_module can propagate back to controller as a fact
self._discovered_interpreter_key = discovered_key
else:
task_vars['ansible_delegated_vars'][self._task.delegate_to]['ansible_facts'][discovered_key] = self._discovered_interpreter
return (module_style, module_shebang, module_data, module_path)
def _compute_environment_string(self, raw_environment_out=None):
'''
Builds the environment string to be used when executing the remote task.
'''
final_environment = dict()
if self._task.environment is not None:
environments = self._task.environment
if not isinstance(environments, list):
environments = [environments]
# The order of environments matters to make sure we merge
# in the parent's values first so those in the block then
# task 'win' in precedence
for environment in environments:
if environment is None or len(environment) == 0:
continue
temp_environment = self._templar.template(environment)
if not isinstance(temp_environment, dict):
raise AnsibleError("environment must be a dictionary, received %s (%s)" % (temp_environment, type(temp_environment)))
# very deliberately using update here instead of combine_vars, as
# these environment settings should not need to merge sub-dicts
final_environment.update(temp_environment)
if len(final_environment) > 0:
final_environment = self._templar.template(final_environment)
if isinstance(raw_environment_out, dict):
raw_environment_out.clear()
raw_environment_out.update(final_environment)
return self._connection._shell.env_prefix(**final_environment)
def _early_needs_tmp_path(self):
'''
Determines if a tmp path should be created before the action is executed.
'''
return getattr(self, 'TRANSFERS_FILES', False)
def _is_pipelining_enabled(self, module_style, wrap_async=False):
'''
Determines if we are required and can do pipelining
'''
try:
is_enabled = self._connection.get_option('pipelining')
except (KeyError, AttributeError, ValueError):
is_enabled = self._play_context.pipelining
# winrm supports async pipeline
# TODO: make other class property 'has_async_pipelining' to separate cases
always_pipeline = self._connection.always_pipeline_modules
# su does not work with pipelining
# TODO: add has_pipelining class prop to become plugins
become_exception = (self._connection.become.name if self._connection.become else '') != 'su'
# any of these require a true
conditions = [
self._connection.has_pipelining, # connection class supports it
is_enabled or always_pipeline, # enabled via config or forced via connection (eg winrm)
module_style == "new", # old style modules do not support pipelining
not C.DEFAULT_KEEP_REMOTE_FILES, # user wants remote files
not wrap_async or always_pipeline, # async does not normally support pipelining unless it does (eg winrm)
become_exception,
]
return all(conditions)
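# Illustrative walk-through (not in the original source): with an ssh
# connection (has_pipelining=True), pipelining enabled, a 'new' style
# module, keep_remote_files off, no async wrapping, and a become method
# other than 'su', every condition above is True and pipelining is used;
# flipping any single condition (e.g. module_style == 'old') disables it.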
def _get_admin_users(self):
'''
Returns a list of admin users that are configured for the current shell
plugin
'''
return self.get_shell_option('admin_users', ['root'])
def _get_remote_user(self):
''' consistently get the 'remote_user' for the action plugin '''
# TODO: use 'current user running ansible' as fallback when moving away from play_context
# pwd.getpwuid(os.getuid()).pw_name
remote_user = None
try:
remote_user = self._connection.get_option('remote_user')
except KeyError:
# plugin does not have a remote_user option, fall back to default and/or play_context
remote_user = getattr(self._connection, 'default_user', None) or self._play_context.remote_user
except AttributeError:
# plugin does not use config system, fallback to old play_context
remote_user = self._play_context.remote_user
return remote_user
def _is_become_unprivileged(self):
'''
The user is not the same as the connection user and is not part of the
shell configured admin users
'''
# if we don't use become then we know we aren't switching to a
# different unprivileged user
if not self._connection.become:
return False
# if we use become and the user is not an admin (or same user) then
# we need to return become_unprivileged as True
admin_users = self._get_admin_users()
remote_user = self._get_remote_user()
become_user = self.get_become_option('become_user')
return bool(become_user and become_user not in admin_users + [remote_user])
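# Illustrative example (not in the original source): with
# remote_user='alice', become_user='bob', and the default admin_users of
# ['root'], 'bob' is not in ['root', 'alice'], so this returns True and
# temp-file permissions must be fixed up for the become user.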
def _make_tmp_path(self, remote_user=None):
'''
Create and return a temporary path on a remote box.
'''
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
if getattr(self._connection, '_remote_is_local', False):
tmpdir = C.DEFAULT_LOCAL_TMP
else:
# NOTE: shell plugins should populate this setting anyway, but they don't do remote expansion, which
# we need for 'non posix' systems like cloud-init and solaris
tmpdir = self._remote_expand_user(self.get_shell_option('remote_tmp', default='~/.ansible/tmp'), sudoable=False)
become_unprivileged = self._is_become_unprivileged()
basefile = self._connection._shell._generate_temp_dir_name()
cmd = self._connection._shell.mkdtemp(basefile=basefile, system=become_unprivileged, tmpdir=tmpdir)
result = self._low_level_execute_command(cmd, sudoable=False)
# error handling on this seems a little aggressive?
if result['rc'] != 0:
if result['rc'] == 5:
output = 'Authentication failure.'
elif result['rc'] == 255 and self._connection.transport in ('ssh',):
if self._play_context.verbosity > 3:
output = u'SSH encountered an unknown error. The output was:\n%s%s' % (result['stdout'], result['stderr'])
else:
output = (u'SSH encountered an unknown error during the connection. '
'We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue')
elif u'No space left on device' in result['stderr']:
output = result['stderr']
else:
output = ('Failed to create temporary directory. '
'In some cases, you may have been able to authenticate and did not have permissions on the target directory. '
'Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. '
'Failed command was: %s, exited with result %d' % (cmd, result['rc']))
if 'stdout' in result and result['stdout'] != u'':
output = output + u", stdout output: %s" % result['stdout']
if self._play_context.verbosity > 3 and 'stderr' in result and result['stderr'] != u'':
output += u", stderr output: %s" % result['stderr']
raise AnsibleConnectionFailure(output)
else:
self._cleanup_remote_tmp = True
try:
stdout_parts = result['stdout'].strip().split('%s=' % basefile, 1)
rc = self._connection._shell.join_path(stdout_parts[-1], u'').splitlines()[-1]
except IndexError:
# stdout was empty or just space, set to / to trigger error in next if
rc = '/'
# Catch failure conditions, files should never be
# written to locations in /.
if rc == '/':
raise AnsibleError('failed to resolve remote temporary directory from %s: `%s` returned empty string' % (basefile, cmd))
self._connection._shell.tmpdir = rc
return rc
def _should_remove_tmp_path(self, tmp_path):
'''Determine if temporary path should be deleted or kept by user request/config'''
return tmp_path and self._cleanup_remote_tmp and not C.DEFAULT_KEEP_REMOTE_FILES and "-tmp-" in tmp_path
def _remove_tmp_path(self, tmp_path, force=False):
'''Remove a temporary path we created. '''
if tmp_path is None and self._connection._shell.tmpdir:
tmp_path = self._connection._shell.tmpdir
if force or self._should_remove_tmp_path(tmp_path):
cmd = self._connection._shell.remove(tmp_path, recurse=True)
# If we have gotten here we have a working connection configuration.
# If the connection breaks we could leave tmp directories out on the remote system.
tmp_rm_res = self._low_level_execute_command(cmd, sudoable=False)
if tmp_rm_res.get('rc', 0) != 0:
display.warning('Error deleting remote temporary files (rc: %s, stderr: %s)'
% (tmp_rm_res.get('rc'), tmp_rm_res.get('stderr', 'No error string available.')))
else:
self._connection._shell.tmpdir = None
def _transfer_file(self, local_path, remote_path):
"""
Copy a file from the controller to a remote path
:arg local_path: Path on controller to transfer
:arg remote_path: Path on the remote system to transfer into
.. warning::
* When you use this function you likely want to use fixup_perms2() on the
remote_path to make sure that the remote file is readable when the user becomes
a non-privileged user.
* If you use fixup_perms2() on the file and copy or move the file into place, you will
need to then remove filesystem acls on the file once it has been copied into place by
the module. See how the copy module implements this for help.
"""
self._connection.put_file(local_path, remote_path)
return remote_path
def _transfer_data(self, remote_path, data):
'''
Copies the module data out to the temporary module path.
'''
if isinstance(data, dict):
data = jsonify(data)
afd, afile = tempfile.mkstemp(dir=C.DEFAULT_LOCAL_TMP)
afo = os.fdopen(afd, 'wb')
try:
data = to_bytes(data, errors='surrogate_or_strict')
afo.write(data)
except Exception as e:
raise AnsibleError("failure writing module data to temporary file for transfer: %s" % to_native(e))
afo.flush()
afo.close()
try:
self._transfer_file(afile, remote_path)
finally:
os.unlink(afile)
return remote_path
def _fixup_perms2(self, remote_paths, remote_user=None, execute=True):
"""
We need the files we upload to be readable (and sometimes executable)
by the user being sudo'd to but we want to limit other people's access
(because the files could contain passwords or other private
information). We achieve this in one of these ways:
* If no sudo is performed or the remote_user is sudo'ing to
themselves, we don't have to change permissions.
* If the remote_user sudo's to a privileged user (for instance, root),
we don't have to change permissions
* If the remote_user sudo's to an unprivileged user then we attempt to
grant the unprivileged user access via file system acls.
* If granting file system acls fails we try to change the owner of the
file with chown which only works in case the remote_user is
privileged or the remote systems allows chown calls by unprivileged
users (e.g. HP-UX)
* If the above fails, we next try 'chmod +a' which is a macOS way of
setting ACLs on files.
* If the above fails, we check if ansible_common_remote_group is set.
If it is, we attempt to chgrp the file to its value. This is useful
if the remote_user has a group in common with the become_user. As the
remote_user, we can chgrp the file to that group and allow the
become_user to read it.
* If (the chown fails AND ansible_common_remote_group is not set) OR
(ansible_common_remote_group is set AND the chgrp (or following chmod)
returned non-zero), we can set the file to be world readable so that
the second unprivileged user can read the file.
Since this could allow other users to get access to private
information we only do this if ansible is configured with
"allow_world_readable_tmpfiles" in the ansible.cfg. Also note that
when ansible_common_remote_group is set this final fallback is very
unlikely to ever be triggered, so long as chgrp was successful. But
just because the chgrp was successful, does not mean Ansible can
necessarily access the files (if, for example, the variable was set
to a group that remote_user is in, and can chgrp to, but does not have
in common with become_user).
"""
if remote_user is None:
remote_user = self._get_remote_user()
# Step 1: Are we on windows?
if getattr(self._connection._shell, "_IS_WINDOWS", False):
# This won't work on Powershell as-is, so we'll just completely
# skip until we have a need for it, at which point we'll have to do
# something different.
return remote_paths
# Step 2: If we're not becoming an unprivileged user, we are roughly
# done. Make the files +x if we're asked to, and return.
if not self._is_become_unprivileged():
if execute:
# Can't depend on the file being transferred with execute permissions.
# Only need user perms because no become was used here
res = self._remote_chmod(remote_paths, 'u+x')
if res['rc'] != 0:
raise AnsibleError(
'Failed to set execute bit on remote files '
'(rc: {0}, err: {1})'.format(
res['rc'],
to_native(res['stderr'])))
return remote_paths
# If we're still here, we have an unprivileged user that's different
# than the ssh user.
become_user = self.get_become_option('become_user')
# Try to use file system acls to make the files readable for sudo'd
# user
if execute:
chmod_mode = 'rx'
setfacl_mode = 'r-x'
# Apple patches their "file_cmds" chmod with ACL support
chmod_acl_mode = '{0} allow read,execute'.format(become_user)
# POSIX-draft ACL specification. Solaris, maybe others.
# See chmod(1) on something Solaris-based for syntax details.
posix_acl_mode = 'A+user:{0}:rx:allow'.format(become_user)
else:
chmod_mode = 'rX'
# TODO: this form fails silently on freebsd. We currently
# never call _fixup_perms2() with execute=False but if we
# start to we'll have to fix this.
setfacl_mode = 'r-X'
# Apple
chmod_acl_mode = '{0} allow read'.format(become_user)
# POSIX-draft
posix_acl_mode = 'A+user:{0}:r:allow'.format(become_user)
# Step 3a: Are we able to use setfacl to add user ACLs to the file?
res = self._remote_set_user_facl(
remote_paths,
become_user,
setfacl_mode)
if res['rc'] == 0:
return remote_paths
# Step 3b: Set execute if we need to. We do this before anything else
# because some of the methods below might work but not let us set +x
# as part of them.
if execute:
res = self._remote_chmod(remote_paths, 'u+x')
if res['rc'] != 0:
raise AnsibleError(
'Failed to set file mode on remote temporary files '
'(rc: {0}, err: {1})'.format(
res['rc'],
to_native(res['stderr'])))
# Step 3c: File system ACLs failed above; try falling back to chown.
res = self._remote_chown(remote_paths, become_user)
if res['rc'] == 0:
return remote_paths
# Check if we are an admin/root user. If we are and got here, it means
# we failed to chown as root and something weird has happened.
if remote_user in self._get_admin_users():
raise AnsibleError(
'Failed to change ownership of the temporary files Ansible '
'needs to create despite connecting as a privileged user. '
'Unprivileged become user would be unable to read the '
'file.')
# Step 3d: Try macOS's special chmod + ACL
# macOS chmod's +a flag takes its own argument. As a slight hack, we
# pass that argument as the first element of remote_paths. So we end
# up running `chmod +a [that argument] [file 1] [file 2] ...`
try:
res = self._remote_chmod([chmod_acl_mode] + list(remote_paths), '+a')
except AnsibleAuthenticationFailure as e:
# Solaris-based chmod will return 5 when it sees an invalid mode,
# and +a is invalid there. Because it returns 5, which is the same
# thing sshpass returns on auth failure, our sshpass code will
# assume that auth failed. If we don't handle that case here, none
# of the other logic below will get run. This is fairly hacky and a
# corner case, but probably one that shows up pretty often in
# Solaris-based environments (and possibly others).
pass
else:
if res['rc'] == 0:
return remote_paths
# Step 3e: Try Solaris/OpenSolaris/OpenIndiana-sans-setfacl chmod
# Similar to macOS above, Solaris 11.4 drops setfacl and takes file ACLs
# via chmod instead. OpenSolaris and illumos-based distros allow for
# using either setfacl or chmod, and compatibility depends on filesystem.
# It should be possible to debug this branch by installing OpenIndiana
# (use ZFS) and going unpriv -> unpriv.
res = self._remote_chmod(remote_paths, posix_acl_mode)
if res['rc'] == 0:
return remote_paths
# we'll need this down here
become_link = get_versioned_doclink('user_guide/become.html')
# Step 3f: Common group
# Otherwise, we're a normal user. We failed to chown the paths to the
# unprivileged user, but if we have a common group with them, we should
# be able to chown it to that.
#
# Note that we have no way of knowing if this will actually work... just
# because chgrp exits successfully does not mean that Ansible will work.
# We could check if the become user is in the group, but this would
# create an extra round trip.
#
# Also note that due to the above, this can prevent the
# ALLOW_WORLD_READABLE_TMPFILES logic below from ever getting called. We
# leave this up to the user to rectify if they have both of these
# features enabled.
group = self.get_shell_option('common_remote_group')
if group is not None:
res = self._remote_chgrp(remote_paths, group)
if res['rc'] == 0:
# warn user that something might go weirdly here.
if self.get_shell_option('world_readable_temp'):
display.warning(
'Both common_remote_group and '
'allow_world_readable_tmpfiles are set. chgrp was '
'successful, but there is no guarantee that Ansible '
'will be able to read the files after this operation, '
'particularly if common_remote_group was set to a '
'group of which the unprivileged become user is not a '
'member. In this situation, '
'allow_world_readable_tmpfiles is a no-op. See this '
'URL for more details: %s'
'#risks-of-becoming-an-unprivileged-user' % become_link)
if execute:
group_mode = 'g+rwx'
else:
group_mode = 'g+rw'
res = self._remote_chmod(remote_paths, group_mode)
if res['rc'] == 0:
return remote_paths
# Step 4: World-readable temp directory
if self.get_shell_option('world_readable_temp'):
# chown and fs acls failed -- do things this insecure way only if
# the user opted in in the config file
display.warning(
'Using world-readable permissions for temporary files Ansible '
'needs to create when becoming an unprivileged user. This may '
'be insecure. For information on securing this, see %s'
'#risks-of-becoming-an-unprivileged-user' % become_link)
res = self._remote_chmod(remote_paths, 'a+%s' % chmod_mode)
if res['rc'] == 0:
return remote_paths
raise AnsibleError(
'Failed to set file mode on remote files '
'(rc: {0}, err: {1})'.format(
res['rc'],
to_native(res['stderr'])))
raise AnsibleError(
'Failed to set permissions on the temporary files Ansible needs '
'to create when becoming an unprivileged user '
'(rc: %s, err: %s). For information on working around this, see %s'
'#risks-of-becoming-an-unprivileged-user' % (
res['rc'],
to_native(res['stderr']), become_link))
def _remote_chmod(self, paths, mode, sudoable=False):
'''
Issue a remote chmod command
'''
cmd = self._connection._shell.chmod(paths, mode)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_chown(self, paths, user, sudoable=False):
'''
Issue a remote chown command
'''
cmd = self._connection._shell.chown(paths, user)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_chgrp(self, paths, group, sudoable=False):
'''
Issue a remote chgrp command
'''
cmd = self._connection._shell.chgrp(paths, group)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_set_user_facl(self, paths, user, mode, sudoable=False):
'''
Issue a remote call to setfacl
'''
cmd = self._connection._shell.set_user_facl(paths, user, mode)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _execute_remote_stat(self, path, all_vars, follow, tmp=None, checksum=True):
'''
Get information from remote file.
'''
if tmp is not None:
display.warning('_execute_remote_stat no longer honors the tmp parameter. Action'
' plugins should set self._connection._shell.tmpdir to share'
' the tmpdir')
del tmp # No longer used
module_args = dict(
path=path,
follow=follow,
get_checksum=checksum,
checksum_algorithm='sha1',
)
mystat = self._execute_module(module_name='ansible.legacy.stat', module_args=module_args, task_vars=all_vars,
wrap_async=False)
if mystat.get('failed'):
msg = mystat.get('module_stderr')
if not msg:
msg = mystat.get('module_stdout')
if not msg:
msg = mystat.get('msg')
raise AnsibleError('Failed to get information on remote file (%s): %s' % (path, msg))
if not mystat['stat']['exists']:
# empty might be matched, 1 should never match, also backwards compatible
mystat['stat']['checksum'] = '1'
# happens sometimes when it is a dir and not on bsd
if 'checksum' not in mystat['stat']:
mystat['stat']['checksum'] = ''
elif not isinstance(mystat['stat']['checksum'], string_types):
raise AnsibleError("Invalid checksum returned by stat: expected a string type but got %s" % type(mystat['stat']['checksum']))
return mystat['stat']
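# Illustrative note (not in the original source): callers consume the
# returned stat mapping via keys such as 'exists', 'isdir', and
# 'checksum'; per the handling above, a missing file reports checksum '1'
# and a platform that returns no checksum (e.g. for directories) reports ''.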
def _remote_checksum(self, path, all_vars, follow=False):
"""Deprecated. Use _execute_remote_stat() instead.
Produces a remote checksum given a path.
Returns a number 0-5 for specific errors instead of a checksum, and ensures each error value is distinct:
0 = unknown error
1 = file does not exist, this might not be an error
2 = permissions issue
3 = it's a directory, not a file
4 = stat module failed, likely due to not finding python
5 = appropriate json module not found
"""
self._display.deprecated("The '_remote_checksum()' method is deprecated. "
"The plugin author should update the code to use '_execute_remote_stat()' instead", "2.16")
x = "0" # unknown error has occurred
try:
remote_stat = self._execute_remote_stat(path, all_vars, follow=follow)
if remote_stat['exists'] and remote_stat['isdir']:
x = "3" # its a directory not a file
else:
x = remote_stat['checksum'] # if 1, file is missing
except AnsibleError as e:
errormsg = to_text(e)
if errormsg.endswith(u'Permission denied'):
x = "2" # cannot read file
elif errormsg.endswith(u'MODULE FAILURE'):
x = "4" # python not found or module uncaught exception
elif 'json' in errormsg:
x = "5" # json module needed
finally:
return x # pylint: disable=lost-exception
def _remote_expand_user(self, path, sudoable=True, pathsep=None):
''' takes a remote path and performs tilde/$HOME expansion on the remote host '''
# We only expand ~/path and ~username/path
if not path.startswith('~'):
return path
# Per Jborean, we don't have to worry about Windows as we don't have a notion of user's home
# dir there.
split_path = path.split(os.path.sep, 1)
expand_path = split_path[0]
if expand_path == '~':
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
become_user = self.get_become_option('become_user')
if getattr(self._connection, '_remote_is_local', False):
pass
elif sudoable and self._connection.become and become_user:
expand_path = '~%s' % become_user
else:
# use remote user instead, if none set default to current user
expand_path = '~%s' % (self._get_remote_user() or '')
# use shell to construct appropriate command and execute
cmd = self._connection._shell.expand_user(expand_path)
data = self._low_level_execute_command(cmd, sudoable=False)
try:
initial_fragment = data['stdout'].strip().splitlines()[-1]
except IndexError:
initial_fragment = None
if not initial_fragment:
# Something went wrong trying to expand the path remotely. Try using pwd, if not, return
# the original string
cmd = self._connection._shell.pwd()
pwd = self._low_level_execute_command(cmd, sudoable=False).get('stdout', '').strip()
if pwd:
expanded = pwd
else:
expanded = path
elif len(split_path) > 1:
expanded = self._connection._shell.join_path(initial_fragment, *split_path[1:])
else:
expanded = initial_fragment
if '..' in os.path.dirname(expanded).split('/'):
raise AnsibleError("'%s' returned an invalid relative home directory path containing '..'" % self._play_context.remote_addr)
return expanded
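# Illustrative example (not in the original source): for path
# '~/motd.txt' with become enabled and become_user='bob', expand_path
# becomes '~bob', the shell expands it remotely, and the trailing
# 'motd.txt' fragment is re-joined, yielding e.g. '/home/bob/motd.txt'.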
def _strip_success_message(self, data):
'''
Removes the BECOME-SUCCESS message from the data.
'''
if data.strip().startswith('BECOME-SUCCESS-'):
data = re.sub(r'^((\r)?\n)?BECOME-SUCCESS.*(\r)?\n', '', data)
return data
def _update_module_args(self, module_name, module_args, task_vars):
# set check mode in the module arguments, if required
if self._play_context.check_mode:
if not self._supports_check_mode:
raise AnsibleError("check mode is not supported for this operation")
module_args['_ansible_check_mode'] = True
else:
module_args['_ansible_check_mode'] = False
# set no log in the module arguments, if required
no_target_syslog = C.config.get_config_value('DEFAULT_NO_TARGET_SYSLOG', variables=task_vars)
module_args['_ansible_no_log'] = self._play_context.no_log or no_target_syslog
# set debug in the module arguments, if required
module_args['_ansible_debug'] = C.DEFAULT_DEBUG
# let module know we are in diff mode
module_args['_ansible_diff'] = self._play_context.diff
# let module know our verbosity
module_args['_ansible_verbosity'] = display.verbosity
# give the module information about the ansible version
module_args['_ansible_version'] = __version__
# give the module information about its name
module_args['_ansible_module_name'] = module_name
# set the syslog facility to be used in the module
module_args['_ansible_syslog_facility'] = task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY)
# let module know about filesystems that selinux treats specially
module_args['_ansible_selinux_special_fs'] = C.DEFAULT_SELINUX_SPECIAL_FS
# what to do when parameter values are converted to strings
module_args['_ansible_string_conversion_action'] = C.STRING_CONVERSION_ACTION
# give the module the socket for persistent connections
module_args['_ansible_socket'] = getattr(self._connection, 'socket_path')
if not module_args['_ansible_socket']:
module_args['_ansible_socket'] = task_vars.get('ansible_socket')
# make sure all commands use the designated shell executable
module_args['_ansible_shell_executable'] = self._play_context.executable
# make sure modules are aware if they need to keep the remote files
module_args['_ansible_keep_remote_files'] = C.DEFAULT_KEEP_REMOTE_FILES
# make sure all commands use the designated temporary directory if created
if self._is_become_unprivileged(): # force fallback on remote_tmp as user cannot normally write to dir
module_args['_ansible_tmpdir'] = None
else:
module_args['_ansible_tmpdir'] = self._connection._shell.tmpdir
# make sure the remote_tmp value is sent through in case modules need to create their own
module_args['_ansible_remote_tmp'] = self.get_shell_option('remote_tmp', default='~/.ansible/tmp')
def _execute_module(self, module_name=None, module_args=None, tmp=None, task_vars=None, persist_files=False, delete_remote_tmp=None, wrap_async=False):
'''
Transfer and run a module along with its arguments.
'''
if tmp is not None:
display.warning('_execute_module no longer honors the tmp parameter. Action plugins'
' should set self._connection._shell.tmpdir to share the tmpdir')
del tmp # No longer used
if delete_remote_tmp is not None:
display.warning('_execute_module no longer honors the delete_remote_tmp parameter.'
' Action plugins should check self._connection._shell.tmpdir to'
' see if a tmpdir existed before they were called to determine'
' if they are responsible for removing it.')
del delete_remote_tmp # No longer used
tmpdir = self._connection._shell.tmpdir
# We set the module_style to new here so the remote_tmp is created
# before the module args are built if remote_tmp is needed (async).
# If the module_style turns out to not be new and we didn't create the
# remote tmp here, it will still be created. This must be done before
# calling self._update_module_args() so the module wrapper has the
# correct remote_tmp value set
if not self._is_pipelining_enabled("new", wrap_async) and tmpdir is None:
self._make_tmp_path()
tmpdir = self._connection._shell.tmpdir
if task_vars is None:
task_vars = dict()
# if a module name was not specified for this execution, use the action from the task
if module_name is None:
module_name = self._task.action
if module_args is None:
module_args = self._task.args
self._update_module_args(module_name, module_args, task_vars)
remove_async_dir = None
if wrap_async or self._task.async_val:
async_dir = self.get_shell_option('async_dir', default="~/.ansible_async")
remove_async_dir = len(self._task.environment)
self._task.environment.append({"ANSIBLE_ASYNC_DIR": async_dir})
# FUTURE: refactor this along with module build process to better encapsulate "smart wrapper" functionality
(module_style, shebang, module_data, module_path) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
display.vvv("Using module file %s" % module_path)
if not shebang and module_style != 'binary':
raise AnsibleError("module (%s) is missing interpreter line" % module_name)
self._used_interpreter = shebang
remote_module_path = None
if not self._is_pipelining_enabled(module_style, wrap_async):
# we might need remote tmp dir
if tmpdir is None:
self._make_tmp_path()
tmpdir = self._connection._shell.tmpdir
remote_module_filename = self._connection._shell.get_remote_filename(module_path)
remote_module_path = self._connection._shell.join_path(tmpdir, 'AnsiballZ_%s' % remote_module_filename)
args_file_path = None
if module_style in ('old', 'non_native_want_json', 'binary'):
# we'll also need a tmp file to hold our module arguments
args_file_path = self._connection._shell.join_path(tmpdir, 'args')
if remote_module_path or module_style != 'new':
display.debug("transferring module to remote %s" % remote_module_path)
if module_style == 'binary':
self._transfer_file(module_path, remote_module_path)
else:
self._transfer_data(remote_module_path, module_data)
if module_style == 'old':
# we need to dump the module args to a k=v string in a file on
# the remote system, which can be read and parsed by the module
args_data = ""
for k, v in module_args.items():
args_data += '%s=%s ' % (k, shlex.quote(text_type(v)))
self._transfer_data(args_file_path, args_data)
elif module_style in ('non_native_want_json', 'binary'):
self._transfer_data(args_file_path, json.dumps(module_args))
display.debug("done transferring module to remote")
environment_string = self._compute_environment_string()
# remove the ANSIBLE_ASYNC_DIR env entry if we added a temporary one for
# the async_wrapper task.
if remove_async_dir is not None:
del self._task.environment[remove_async_dir]
remote_files = []
if tmpdir and remote_module_path:
remote_files = [tmpdir, remote_module_path]
if args_file_path:
remote_files.append(args_file_path)
sudoable = True
in_data = None
cmd = ""
if wrap_async and not self._connection.always_pipeline_modules:
# configure, upload, and chmod the async_wrapper module
(async_module_style, shebang, async_module_data, async_module_path) = self._configure_module(
module_name='ansible.legacy.async_wrapper', module_args=dict(), task_vars=task_vars)
async_module_remote_filename = self._connection._shell.get_remote_filename(async_module_path)
remote_async_module_path = self._connection._shell.join_path(tmpdir, async_module_remote_filename)
self._transfer_data(remote_async_module_path, async_module_data)
remote_files.append(remote_async_module_path)
async_limit = self._task.async_val
async_jid = str(random.randint(0, 999999999999))
# call the interpreter for async_wrapper directly
# this permits use of a script for an interpreter on non-Linux platforms
interpreter = shebang.replace('#!', '').strip()
async_cmd = [interpreter, remote_async_module_path, async_jid, async_limit, remote_module_path]
if environment_string:
async_cmd.insert(0, environment_string)
if args_file_path:
async_cmd.append(args_file_path)
else:
# maintain a fixed number of positional parameters for async_wrapper
async_cmd.append('_')
if not self._should_remove_tmp_path(tmpdir):
async_cmd.append("-preserve_tmp")
cmd = " ".join(to_text(x) for x in async_cmd)
else:
if self._is_pipelining_enabled(module_style):
in_data = module_data
display.vvv("Pipelining is enabled.")
else:
cmd = remote_module_path
cmd = self._connection._shell.build_module_command(environment_string, shebang, cmd, arg_path=args_file_path).strip()
# Fix permissions of the tmpdir path and tmpdir files. This should be called after all
# files have been transferred.
if remote_files:
# remove none/empty
remote_files = [x for x in remote_files if x]
self._fixup_perms2(remote_files, self._get_remote_user())
# actually execute
res = self._low_level_execute_command(cmd, sudoable=sudoable, in_data=in_data)
# parse the main result
data = self._parse_returned_data(res)
# NOTE: INTERNAL KEYS ONLY ACCESSIBLE HERE
# get internal info before cleaning
if data.pop("_ansible_suppress_tmpdir_delete", False):
self._cleanup_remote_tmp = False
# NOTE: yum returns results .. but that made it 'compatible' with squashing, so we allow mappings, for now
if 'results' in data and (not isinstance(data['results'], Sequence) or isinstance(data['results'], string_types)):
data['ansible_module_results'] = data['results']
del data['results']
display.warning("Found internal 'results' key in module return, renamed to 'ansible_module_results'.")
# remove internal keys
remove_internal_keys(data)
if wrap_async:
# async_wrapper will clean up its tmpdir on its own so we want the controller side to
# forget about it now
self._connection._shell.tmpdir = None
# FIXME: for backwards compat, figure out if still makes sense
data['changed'] = True
# pre-split stdout/stderr into lines if needed
if 'stdout' in data and 'stdout_lines' not in data:
# if the value is 'False', a default won't catch it.
txt = data.get('stdout', None) or u''
data['stdout_lines'] = txt.splitlines()
if 'stderr' in data and 'stderr_lines' not in data:
# if the value is 'False', a default won't catch it.
txt = data.get('stderr', None) or u''
data['stderr_lines'] = txt.splitlines()
# propagate interpreter discovery results back to the controller
if self._discovered_interpreter_key:
if data.get('ansible_facts') is None:
data['ansible_facts'] = {}
data['ansible_facts'][self._discovered_interpreter_key] = self._discovered_interpreter
if self._discovery_warnings:
if data.get('warnings') is None:
data['warnings'] = []
data['warnings'].extend(self._discovery_warnings)
if self._discovery_deprecation_warnings:
if data.get('deprecations') is None:
data['deprecations'] = []
data['deprecations'].extend(self._discovery_deprecation_warnings)
# mark the entire module results untrusted as a template right here, since the current action could
# possibly template one of these values.
data = wrap_var(data)
display.debug("done with _execute_module (%s, %s)" % (module_name, module_args))
return data
def _parse_returned_data(self, res):
try:
filtered_output, warnings = _filter_non_json_lines(res.get('stdout', u''), objects_only=True)
for w in warnings:
display.warning(w)
data = json.loads(filtered_output)
data['_ansible_parsed'] = True
except ValueError:
# not valid json, lets try to capture error
data = dict(failed=True, _ansible_parsed=False)
data['module_stdout'] = res.get('stdout', u'')
if 'stderr' in res:
data['module_stderr'] = res['stderr']
if res['stderr'].startswith(u'Traceback'):
data['exception'] = res['stderr']
# in some cases a traceback will arrive on stdout instead of stderr, such as when using ssh with -tt
if 'exception' not in data and data['module_stdout'].startswith(u'Traceback'):
data['exception'] = data['module_stdout']
# The default
data['msg'] = "MODULE FAILURE"
# try to figure out if we are missing interpreter
if self._used_interpreter is not None:
interpreter = re.escape(self._used_interpreter.lstrip('!#'))
match = re.compile('%s: (?:No such file or directory|not found)' % interpreter)
if match.search(data['module_stderr']) or match.search(data['module_stdout']):
data['msg'] = "The module failed to execute correctly, you probably need to set the interpreter."
# always append hint
data['msg'] += '\nSee stdout/stderr for the exact error'
if 'rc' in res:
data['rc'] = res['rc']
return data
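# Illustrative example (not in the original source): well-formed module
# output such as res={'stdout': '{"changed": false}'} parses to
# {'changed': False, '_ansible_parsed': True}; unparseable stdout falls
# into the except branch and yields failed=True, _ansible_parsed=False, a
# msg starting with 'MODULE FAILURE', plus module_stdout/module_stderr
# and rc when available.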
# FIXME: move to connection base
def _low_level_execute_command(self, cmd, sudoable=True, in_data=None, executable=None, encoding_errors='surrogate_then_replace', chdir=None):
'''
This is the function which executes the low level shell command, which
may be commands to create/remove directories for temporary files, or to
run the module code or python directly when pipelining.
:kwarg encoding_errors: If the value returned by the command isn't
utf-8 then we have to figure out how to transform it to unicode.
If the value is just going to be displayed to the user (or
discarded) then the default of 'replace' is fine. If the data is
used as a key or is going to be written back out to a file
verbatim, then this won't work. May have to use some sort of
replacement strategy (python3 could use surrogateescape)
:kwarg chdir: cd into this directory before executing the command.
'''
display.debug("_low_level_execute_command(): starting")
# if not cmd:
# # this can happen with powershell modules when there is no analog to a Windows command (like chmod)
# display.debug("_low_level_execute_command(): no command, exiting")
# return dict(stdout='', stderr='', rc=254)
if chdir:
display.debug("_low_level_execute_command(): changing cwd to %s for this command" % chdir)
cmd = self._connection._shell.append_command('cd %s' % chdir, cmd)
# https://github.com/ansible/ansible/issues/68054
if executable:
self._connection._shell.executable = executable
ruser = self._get_remote_user()
buser = self.get_become_option('become_user')
if (sudoable and self._connection.become and # if sudoable and have become
resource_from_fqcr(self._connection.transport) != 'network_cli' and # if not using network_cli
(C.BECOME_ALLOW_SAME_USER or (buser != ruser or not any((ruser, buser))))): # if we allow same user PE or users are different and either is set
display.debug("_low_level_execute_command(): using become for this command")
cmd = self._connection.become.build_become_command(cmd, self._connection._shell)
if self._connection.allow_executable:
if executable is None:
executable = self._play_context.executable
# mitigation for SSH race which can drop stdout (https://github.com/ansible/ansible/issues/13876)
# only applied for the default executable to avoid interfering with the raw action
cmd = self._connection._shell.append_command(cmd, 'sleep 0')
if executable:
cmd = executable + ' -c ' + shlex.quote(cmd)
display.debug("_low_level_execute_command(): executing: %s" % (cmd,))
# Change directory to basedir of task for command execution when connection is local
if self._connection.transport == 'local':
self._connection.cwd = to_bytes(self._loader.get_basedir(), errors='surrogate_or_strict')
rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
# stdout and stderr may be either a file-like or a bytes object.
# Convert either one to a text type
if isinstance(stdout, binary_type):
out = to_text(stdout, errors=encoding_errors)
elif not isinstance(stdout, text_type):
out = to_text(b''.join(stdout.readlines()), errors=encoding_errors)
else:
out = stdout
if isinstance(stderr, binary_type):
err = to_text(stderr, errors=encoding_errors)
elif not isinstance(stderr, text_type):
err = to_text(b''.join(stderr.readlines()), errors=encoding_errors)
else:
err = stderr
if rc is None:
rc = 0
# be sure to remove the BECOME-SUCCESS message now
out = self._strip_success_message(out)
display.debug(u"_low_level_execute_command() done: rc=%d, stdout=%s, stderr=%s" % (rc, out, err))
return dict(rc=rc, stdout=out, stdout_lines=out.splitlines(), stderr=err, stderr_lines=err.splitlines())
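# Illustrative example of the returned mapping (mirrors the dict built
# above): {'rc': 0, 'stdout': 'ok', 'stdout_lines': ['ok'], 'stderr': '',
# 'stderr_lines': []}.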
def _get_diff_data(self, destination, source, task_vars, source_file=True):
# Note: Since we do not diff the source and destination before we transform from bytes into
# text the diff between source and destination may not be accurate. To fix this, we'd need
# to move the diffing from the callback plugins into here.
#
# Example of data which would cause trouble is src_content == b'\xff' and dest_content ==
# b'\xfe'. Neither of those are valid utf-8 so both get turned into the replacement
# character: diff['before'] = u'�' ; diff['after'] = u'�' When the callback plugin later
# diffs before and after it shows an empty diff.
diff = {}
display.debug("Going to peek to see if file has changed permissions")
peek_result = self._execute_module(
module_name='ansible.legacy.file', module_args=dict(path=destination, _diff_peek=True),
task_vars=task_vars, persist_files=True)
if peek_result.get('failed', False):
display.warning(u"Failed to get diff between '%s' and '%s': %s" % (os.path.basename(source), destination, to_text(peek_result.get(u'msg', u''))))
return diff
if peek_result.get('rc', 0) == 0:
if peek_result.get('state') in (None, 'absent'):
diff['before'] = u''
elif peek_result.get('appears_binary'):
diff['dst_binary'] = 1
elif peek_result.get('size') and C.MAX_FILE_SIZE_FOR_DIFF > 0 and peek_result['size'] > C.MAX_FILE_SIZE_FOR_DIFF:
diff['dst_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
else:
display.debug(u"Slurping the file %s" % source)
dest_result = self._execute_module(
module_name='ansible.legacy.slurp', module_args=dict(path=destination),
task_vars=task_vars, persist_files=True)
if 'content' in dest_result:
dest_contents = dest_result['content']
if dest_result['encoding'] == u'base64':
dest_contents = base64.b64decode(dest_contents)
else:
raise AnsibleError("unknown encoding in content option, failed: %s" % to_native(dest_result))
diff['before_header'] = destination
diff['before'] = to_text(dest_contents)
if source_file:
st = os.stat(source)
if C.MAX_FILE_SIZE_FOR_DIFF > 0 and st[stat.ST_SIZE] > C.MAX_FILE_SIZE_FOR_DIFF:
diff['src_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
else:
display.debug("Reading local copy of the file %s" % source)
try:
with open(source, 'rb') as src:
src_contents = src.read()
except Exception as e:
raise AnsibleError("Unexpected error while reading source (%s) for diff: %s " % (source, to_native(e)))
if b"\x00" in src_contents:
diff['src_binary'] = 1
else:
diff['after_header'] = source
diff['after'] = to_text(src_contents)
else:
display.debug(u"source of file passed in")
diff['after_header'] = u'dynamically generated'
diff['after'] = source
if self._play_context.no_log:
if 'before' in diff:
diff["before"] = u""
if 'after' in diff:
diff["after"] = u" [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]\n"
return diff
def _find_needle(self, dirname, needle):
'''
find a needle in haystack of paths, optionally using 'dirname' as a subdir.
This will build the ordered list of paths to search and pass them to dwim
to get back the first existing file found.
'''
# dwim already deals with playbook basedirs
path_stack = self._task.get_search_path()
# if missing it will return a file not found exception
return self._loader.path_dwim_relative_stack(path_stack, dirname, needle)
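# Illustrative usage sketch (not in the original source): a file-handling
# action such as copy might resolve its source via
# self._find_needle('files', source), letting the loader walk the task's
# search path and raise a 'file not found' error when nothing matches.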
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,393 |
Remove deprecated ALLOW_WORLD_READABLE_TMPFILES
|
### Summary
The config option `ALLOW_WORLD_READABLE_TMPFILES` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.14.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77393
|
https://github.com/ansible/ansible/pull/77410
|
60b4200bc6fee69384da990bb7884f58577fc724
|
e080bae766fdf98a659cc96a0160ca17f6b4c2cd
| 2022-03-29T21:34:01Z |
python
| 2022-03-30T13:46:57Z |
test/units/plugins/action/test_action.py
|
# -*- coding: utf-8 -*-
# (c) 2015, Florian Apolloner <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
from ansible import constants as C
from units.compat import unittest
from mock import patch, MagicMock, mock_open
from ansible.errors import AnsibleError, AnsibleAuthenticationFailure
from ansible.module_utils.six import text_type
from ansible.module_utils.six.moves import shlex_quote, builtins
from ansible.module_utils._text import to_bytes
from ansible.playbook.play_context import PlayContext
from ansible.plugins.action import ActionBase
from ansible.template import Templar
from ansible.vars.clean import clean_facts
from units.mock.loader import DictDataLoader
python_module_replacers = br"""
#!/usr/bin/python
#ANSIBLE_VERSION = "<<ANSIBLE_VERSION>>"
#MODULE_COMPLEX_ARGS = "<<INCLUDE_ANSIBLE_MODULE_COMPLEX_ARGS>>"
#SELINUX_SPECIAL_FS="<<SELINUX_SPECIAL_FILESYSTEMS>>"
test = u'Toshio \u304f\u3089\u3068\u307f'
from ansible.module_utils.basic import *
"""
powershell_module_replacers = b"""
WINDOWS_ARGS = "<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"
# POWERSHELL_COMMON
"""
def _action_base():
fake_loader = DictDataLoader({
})
mock_module_loader = MagicMock()
mock_shared_loader_obj = MagicMock()
mock_shared_loader_obj.module_loader = mock_module_loader
mock_connection_loader = MagicMock()
mock_shared_loader_obj.connection_loader = mock_connection_loader
mock_connection = MagicMock()
play_context = MagicMock()
action_base = DerivedActionBase(task=None,
connection=mock_connection,
play_context=play_context,
loader=fake_loader,
templar=None,
shared_loader_obj=mock_shared_loader_obj)
return action_base
class DerivedActionBase(ActionBase):
TRANSFERS_FILES = False
def run(self, tmp=None, task_vars=None):
# We're not testing the plugin run() method, just the helper
# methods ActionBase defines
return super(DerivedActionBase, self).run(tmp=tmp, task_vars=task_vars)
class TestActionBase(unittest.TestCase):
def test_action_base_run(self):
mock_task = MagicMock()
mock_task.action = "foo"
mock_task.args = dict(a=1, b=2, c=3)
mock_connection = MagicMock()
play_context = PlayContext()
mock_task.async_val = None
action_base = DerivedActionBase(mock_task, mock_connection, play_context, None, None, None)
results = action_base.run()
self.assertEqual(results, dict())
mock_task.async_val = 0
action_base = DerivedActionBase(mock_task, mock_connection, play_context, None, None, None)
results = action_base.run()
self.assertEqual(results, {})
def test_action_base__configure_module(self):
fake_loader = DictDataLoader({
})
# create our fake task
mock_task = MagicMock()
mock_task.action = "copy"
mock_task.async_val = 0
mock_task.delegate_to = None
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
# create a mock shared loader object
def mock_find_plugin_with_context(name, options, collection_list=None):
mockctx = MagicMock()
if name == 'badmodule':
mockctx.resolved = False
mockctx.plugin_resolved_path = None
elif '.ps1' in options:
mockctx.resolved = True
mockctx.plugin_resolved_path = '/fake/path/to/%s.ps1' % name
else:
mockctx.resolved = True
mockctx.plugin_resolved_path = '/fake/path/to/%s' % name
return mockctx
mock_module_loader = MagicMock()
mock_module_loader.find_plugin_with_context.side_effect = mock_find_plugin_with_context
mock_shared_obj_loader = MagicMock()
mock_shared_obj_loader.module_loader = mock_module_loader
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=fake_loader,
templar=Templar(loader=fake_loader),
shared_loader_obj=mock_shared_obj_loader,
)
# test python module formatting
with patch.object(builtins, 'open', mock_open(read_data=to_bytes(python_module_replacers.strip(), encoding='utf-8'))):
with patch.object(os, 'rename'):
mock_task.args = dict(a=1, foo='fö〩')
mock_connection.module_implementation_preferences = ('',)
(style, shebang, data, path) = action_base._configure_module(mock_task.action, mock_task.args,
task_vars=dict(ansible_python_interpreter='/usr/bin/python',
ansible_playbook_python='/usr/bin/python'))
self.assertEqual(style, "new")
self.assertEqual(shebang, u"#!/usr/bin/python")
# test module not found
self.assertRaises(AnsibleError, action_base._configure_module, 'badmodule', mock_task.args, {})
# test powershell module formatting
with patch.object(builtins, 'open', mock_open(read_data=to_bytes(powershell_module_replacers.strip(), encoding='utf-8'))):
mock_task.action = 'win_copy'
mock_task.args = dict(b=2)
mock_connection.module_implementation_preferences = ('.ps1',)
(style, shebang, data, path) = action_base._configure_module('stat', mock_task.args, {})
self.assertEqual(style, "new")
self.assertEqual(shebang, u'#!powershell')
# test module not found
self.assertRaises(AnsibleError, action_base._configure_module, 'badmodule', mock_task.args, {})
def test_action_base__compute_environment_string(self):
fake_loader = DictDataLoader({
})
# create our fake task
mock_task = MagicMock()
mock_task.action = "copy"
mock_task.args = dict(a=1)
# create a mock connection, so we don't actually try and connect to things
def env_prefix(**args):
return ' '.join(['%s=%s' % (k, shlex_quote(text_type(v))) for k, v in args.items()])
mock_connection = MagicMock()
mock_connection._shell.env_prefix.side_effect = env_prefix
# we're using a real play context here
play_context = PlayContext()
# and we're using a real templar here too
templar = Templar(loader=fake_loader)
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=fake_loader,
templar=templar,
shared_loader_obj=None,
)
# test standard environment setup
mock_task.environment = [dict(FOO='foo'), None]
env_string = action_base._compute_environment_string()
self.assertEqual(env_string, "FOO=foo")
# test where environment is not a list
mock_task.environment = dict(FOO='foo')
env_string = action_base._compute_environment_string()
self.assertEqual(env_string, "FOO=foo")
# test environment with a variable in it
templar.available_variables = dict(the_var='bar')
mock_task.environment = [dict(FOO='{{the_var}}')]
env_string = action_base._compute_environment_string()
self.assertEqual(env_string, "FOO=bar")
# test with a bad environment set
mock_task.environment = dict(FOO='foo')
mock_task.environment = ['hi there']
self.assertRaises(AnsibleError, action_base._compute_environment_string)
def test_action_base__early_needs_tmp_path(self):
# create our fake task
mock_task = MagicMock()
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
self.assertFalse(action_base._early_needs_tmp_path())
action_base.TRANSFERS_FILES = True
self.assertTrue(action_base._early_needs_tmp_path())
def test_action_base__make_tmp_path(self):
# create our fake task
mock_task = MagicMock()
def get_shell_opt(opt):
ret = None
if opt == 'admin_users':
ret = ['root', 'toor', 'Administrator']
elif opt == 'remote_tmp':
ret = '~/.ansible/tmp'
return ret
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
mock_connection.transport = 'ssh'
mock_connection._shell.mkdtemp.return_value = 'mkdir command'
mock_connection._shell.join_path.side_effect = os.path.join
mock_connection._shell.get_option = get_shell_opt
mock_connection._shell.HOMES_RE = re.compile(r'(\'|\")?(~|\$HOME)(.*)')
# we're using a real play context here
play_context = PlayContext()
play_context.become = True
play_context.become_user = 'foo'
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
action_base._low_level_execute_command = MagicMock()
action_base._low_level_execute_command.return_value = dict(rc=0, stdout='/some/path')
self.assertEqual(action_base._make_tmp_path('root'), '/some/path/')
# empty path fails
action_base._low_level_execute_command.return_value = dict(rc=0, stdout='')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
# authentication failure
action_base._low_level_execute_command.return_value = dict(rc=5, stdout='')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
# ssh error
action_base._low_level_execute_command.return_value = dict(rc=255, stdout='', stderr='')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
play_context.verbosity = 5
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
# general error
action_base._low_level_execute_command.return_value = dict(rc=1, stdout='some stuff here', stderr='')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
action_base._low_level_execute_command.return_value = dict(rc=1, stdout='some stuff here', stderr='No space left on device')
self.assertRaises(AnsibleError, action_base._make_tmp_path, 'root')
def test_action_base__fixup_perms2(self):
mock_task = MagicMock()
mock_connection = MagicMock()
play_context = PlayContext()
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
action_base._low_level_execute_command = MagicMock()
remote_paths = ['/tmp/foo/bar.txt', '/tmp/baz.txt']
remote_user = 'remoteuser1'
# Used for skipping down to common group dir.
CHMOD_ACL_FLAGS = ('+a', 'A+user:remoteuser2:r:allow')
def runWithNoExpectation(execute=False):
return action_base._fixup_perms2(
remote_paths,
remote_user=remote_user,
execute=execute)
def assertSuccess(execute=False):
self.assertEqual(runWithNoExpectation(execute), remote_paths)
def assertThrowRegex(regex, execute=False):
self.assertRaisesRegex(
AnsibleError,
regex,
action_base._fixup_perms2,
remote_paths,
remote_user=remote_user,
execute=execute)
def get_shell_option_for_arg(args_kv, default):
'''A helper for get_shell_option. Returns a function that, when
called with an ``option`` present in args_kv, returns its value,
and returns ``default`` for every other option name.'''
def _helper(option, *args, **kwargs):
return args_kv.get(option, default)
return _helper
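        # e.g. get_shell_option_for_arg({'remote_tmp': '/tmp'}, None) yields a
        # callable returning '/tmp' for 'remote_tmp' and None for anything else.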
action_base.get_become_option = MagicMock()
action_base.get_become_option.return_value = 'remoteuser2'
# Step 1: On windows, we just return remote_paths
action_base._connection._shell._IS_WINDOWS = True
assertSuccess(execute=False)
assertSuccess(execute=True)
# But if we're not on windows....we have more work to do.
action_base._connection._shell._IS_WINDOWS = False
# Step 2: We're /not/ becoming an unprivileged user
action_base._remote_chmod = MagicMock()
action_base._is_become_unprivileged = MagicMock()
action_base._is_become_unprivileged.return_value = False
# Two subcases:
# - _remote_chmod rc is 0
# - _remote_chmod rc is not 0, something failed
action_base._remote_chmod.return_value = {
'rc': 0,
'stdout': 'some stuff here',
'stderr': '',
}
assertSuccess(execute=True)
# When execute=False, we just get the list back. But add it here for
# completion. chmod is never called.
assertSuccess()
action_base._remote_chmod.return_value = {
'rc': 1,
'stdout': 'some stuff here',
'stderr': 'and here',
}
assertThrowRegex(
'Failed to set execute bit on remote files',
execute=True)
# Step 3: we are becoming unprivileged
action_base._is_become_unprivileged.return_value = True
# Step 3a: setfacl
action_base._remote_set_user_facl = MagicMock()
action_base._remote_set_user_facl.return_value = {
'rc': 0,
'stdout': '',
'stderr': '',
}
assertSuccess()
# Step 3b: chmod +x if we need to
# To get here, setfacl failed, so mock it as such.
action_base._remote_set_user_facl.return_value = {
'rc': 1,
'stdout': '',
'stderr': '',
}
action_base._remote_chmod.return_value = {
'rc': 1,
'stdout': 'some stuff here',
'stderr': '',
}
assertThrowRegex(
'Failed to set file mode on remote temporary file',
execute=True)
action_base._remote_chmod.return_value = {
'rc': 0,
'stdout': 'some stuff here',
'stderr': '',
}
assertSuccess(execute=True)
# Step 3c: chown
action_base._remote_chown = MagicMock()
action_base._remote_chown.return_value = {
'rc': 0,
'stdout': '',
'stderr': '',
}
assertSuccess()
action_base._remote_chown.return_value = {
'rc': 1,
'stdout': '',
'stderr': '',
}
remote_user = 'root'
action_base._get_admin_users = MagicMock()
action_base._get_admin_users.return_value = ['root']
assertThrowRegex('user would be unable to read the file.')
remote_user = 'remoteuser1'
# Step 3d: chmod +a on osx
assertSuccess()
action_base._remote_chmod.assert_called_with(
['remoteuser2 allow read'] + remote_paths,
'+a')
# This case can cause Solaris chmod to return 5 which the ssh plugin
# treats as failure. To prevent a regression and ensure we still try the
# rest of the cases below, we mock the thrown exception here.
# This function ensures that only the macOS case (+a) throws this.
def raise_if_plus_a(definitely_not_underscore, mode):
if mode == '+a':
raise AnsibleAuthenticationFailure()
return {'rc': 0, 'stdout': '', 'stderr': ''}
action_base._remote_chmod.side_effect = raise_if_plus_a
assertSuccess()
# Step 3e: chmod A+ on Solaris
# We threw AnsibleAuthenticationFailure above, try Solaris fallback.
# Based on our side_effect function above, it should be successful.
action_base._remote_chmod.assert_called_with(
remote_paths,
'A+user:remoteuser2:r:allow')
assertSuccess()
# Step 3f: Common group
def rc_1_if_chmod_acl(definitely_not_underscore, mode):
rc = 0
if mode in CHMOD_ACL_FLAGS:
rc = 1
return {'rc': rc, 'stdout': '', 'stderr': ''}
action_base._remote_chmod = MagicMock()
action_base._remote_chmod.side_effect = rc_1_if_chmod_acl
get_shell_option = action_base.get_shell_option
action_base.get_shell_option = MagicMock()
action_base.get_shell_option.side_effect = get_shell_option_for_arg(
{
'common_remote_group': 'commongroup',
},
None)
action_base._remote_chgrp = MagicMock()
action_base._remote_chgrp.return_value = {
'rc': 0,
'stdout': '',
'stderr': '',
}
# TODO: Add test to assert warning is shown if
# ALLOW_WORLD_READABLE_TMPFILES is set in this case.
assertSuccess()
action_base._remote_chgrp.assert_called_once_with(
remote_paths,
'commongroup')
# Step 4: world-readable tmpdir
action_base.get_shell_option.side_effect = get_shell_option_for_arg(
{
'world_readable_temp': True,
'common_remote_group': None,
},
None)
action_base._remote_chmod.return_value = {
'rc': 0,
'stdout': 'some stuff here',
'stderr': '',
}
assertSuccess()
action_base._remote_chmod = MagicMock()
action_base._remote_chmod.return_value = {
'rc': 1,
'stdout': 'some stuff here',
'stderr': '',
}
assertThrowRegex('Failed to set file mode on remote files')
# Otherwise if we make it here in this state, we hit the catch-all
action_base.get_shell_option.side_effect = get_shell_option_for_arg(
{},
None)
assertThrowRegex('on the temporary files Ansible needs to create')
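        # For reference, the fallback chain the steps above model is roughly:
        # setfacl -> chown -> macOS 'chmod +a' -> Solaris 'chmod A+user:...'
        # -> chgrp to common_remote_group -> world-readable chmod, with the
        # catch-all error raised when every mechanism is unavailable.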
def test_action_base__remove_tmp_path(self):
# create our fake task
mock_task = MagicMock()
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
mock_connection._shell.remove.return_value = 'rm some stuff'
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
action_base._low_level_execute_command = MagicMock()
# these don't really return anything or raise errors, so
# we're pretty much calling these for coverage right now
action_base._remove_tmp_path('/bad/path/dont/remove')
action_base._remove_tmp_path('/good/path/to/ansible-tmp-thing')
@patch('os.unlink')
@patch('os.fdopen')
@patch('tempfile.mkstemp')
def test_action_base__transfer_data(self, mock_mkstemp, mock_fdopen, mock_unlink):
# create our fake task
mock_task = MagicMock()
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
mock_connection.put_file.return_value = None
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
mock_afd = MagicMock()
mock_afile = MagicMock()
mock_mkstemp.return_value = (mock_afd, mock_afile)
mock_unlink.return_value = None
mock_afo = MagicMock()
mock_afo.write.return_value = None
mock_afo.flush.return_value = None
mock_afo.close.return_value = None
mock_fdopen.return_value = mock_afo
self.assertEqual(action_base._transfer_data('/path/to/remote/file', 'some data'), '/path/to/remote/file')
self.assertEqual(action_base._transfer_data('/path/to/remote/file', 'some mixed data: fö〩'), '/path/to/remote/file')
self.assertEqual(action_base._transfer_data('/path/to/remote/file', dict(some_key='some value')), '/path/to/remote/file')
self.assertEqual(action_base._transfer_data('/path/to/remote/file', dict(some_key='fö〩')), '/path/to/remote/file')
mock_afo.write.side_effect = Exception()
self.assertRaises(AnsibleError, action_base._transfer_data, '/path/to/remote/file', '')
def test_action_base__execute_remote_stat(self):
# create our fake task
mock_task = MagicMock()
# create a mock connection, so we don't actually try and connect to things
mock_connection = MagicMock()
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
action_base._execute_module = MagicMock()
# test normal case
action_base._execute_module.return_value = dict(stat=dict(checksum='1111111111111111111111111111111111', exists=True))
res = action_base._execute_remote_stat(path='/path/to/file', all_vars=dict(), follow=False)
self.assertEqual(res['checksum'], '1111111111111111111111111111111111')
# test does not exist
action_base._execute_module.return_value = dict(stat=dict(exists=False))
res = action_base._execute_remote_stat(path='/path/to/file', all_vars=dict(), follow=False)
self.assertFalse(res['exists'])
self.assertEqual(res['checksum'], '1')
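        # checksum '1' is the sentinel _execute_remote_stat uses for a missing
        # path; it can never collide with a real checksum digest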
# test no checksum in result from _execute_module
action_base._execute_module.return_value = dict(stat=dict(exists=True))
res = action_base._execute_remote_stat(path='/path/to/file', all_vars=dict(), follow=False)
self.assertTrue(res['exists'])
self.assertEqual(res['checksum'], '')
# test stat call failed
action_base._execute_module.return_value = dict(failed=True, msg="because I said so")
self.assertRaises(AnsibleError, action_base._execute_remote_stat, path='/path/to/file', all_vars=dict(), follow=False)
def test_action_base__execute_module(self):
# create our fake task
mock_task = MagicMock()
mock_task.action = 'copy'
mock_task.args = dict(a=1, b=2, c=3)
# create a mock connection, so we don't actually try and connect to things
def build_module_command(env_string, shebang, cmd, arg_path=None):
to_run = [env_string, cmd]
if arg_path:
to_run.append(arg_path)
return " ".join(to_run)
def get_option(option):
return {'admin_users': ['root', 'toor']}.get(option)
mock_connection = MagicMock()
mock_connection.build_module_command.side_effect = build_module_command
mock_connection.socket_path = None
mock_connection._shell.get_remote_filename.return_value = 'copy.py'
mock_connection._shell.join_path.side_effect = os.path.join
mock_connection._shell.tmpdir = '/var/tmp/mytempdir'
mock_connection._shell.get_option = get_option
# we're using a real play context here
play_context = PlayContext()
# our test class
action_base = DerivedActionBase(
task=mock_task,
connection=mock_connection,
play_context=play_context,
loader=None,
templar=None,
shared_loader_obj=None,
)
# fake a lot of methods as we test those elsewhere
action_base._configure_module = MagicMock()
action_base._supports_check_mode = MagicMock()
action_base._is_pipelining_enabled = MagicMock()
action_base._make_tmp_path = MagicMock()
action_base._transfer_data = MagicMock()
action_base._compute_environment_string = MagicMock()
action_base._low_level_execute_command = MagicMock()
action_base._fixup_perms2 = MagicMock()
action_base._configure_module.return_value = ('new', '#!/usr/bin/python', 'this is the module data', 'path')
action_base._is_pipelining_enabled.return_value = False
action_base._compute_environment_string.return_value = ''
action_base._connection.has_pipelining = False
action_base._make_tmp_path.return_value = '/the/tmp/path'
action_base._low_level_execute_command.return_value = dict(stdout='{"rc": 0, "stdout": "ok"}')
self.assertEqual(action_base._execute_module(module_name=None, module_args=None), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
self.assertEqual(
action_base._execute_module(
module_name='foo',
module_args=dict(z=9, y=8, x=7),
task_vars=dict(a=1)
),
dict(
_ansible_parsed=True,
rc=0,
stdout="ok",
stdout_lines=['ok'],
)
)
# test with needing/removing a remote tmp path
action_base._configure_module.return_value = ('old', '#!/usr/bin/python', 'this is the module data', 'path')
action_base._is_pipelining_enabled.return_value = False
action_base._make_tmp_path.return_value = '/the/tmp/path'
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
action_base._configure_module.return_value = ('non_native_want_json', '#!/usr/bin/python', 'this is the module data', 'path')
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
play_context.become = True
play_context.become_user = 'foo'
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
# test an invalid shebang return
action_base._configure_module.return_value = ('new', '', 'this is the module data', 'path')
action_base._is_pipelining_enabled.return_value = False
action_base._make_tmp_path.return_value = '/the/tmp/path'
self.assertRaises(AnsibleError, action_base._execute_module)
# test with check mode enabled, once with support for check
# mode and once with support disabled to raise an error
play_context.check_mode = True
action_base._configure_module.return_value = ('new', '#!/usr/bin/python', 'this is the module data', 'path')
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
action_base._supports_check_mode = False
self.assertRaises(AnsibleError, action_base._execute_module)
def test_action_base_sudo_only_if_user_differs(self):
fake_loader = MagicMock()
fake_loader.get_basedir.return_value = os.getcwd()
play_context = PlayContext()
action_base = DerivedActionBase(None, None, play_context, fake_loader, None, None)
action_base.get_become_option = MagicMock(return_value='root')
action_base._get_remote_user = MagicMock(return_value='root')
action_base._connection = MagicMock(exec_command=MagicMock(return_value=(0, '', '')))
action_base._connection._shell = shell = MagicMock(append_command=MagicMock(return_value=('JOINED CMD')))
action_base._connection.become = become = MagicMock()
become.build_become_command.return_value = 'foo'
action_base._low_level_execute_command('ECHO', sudoable=True)
become.build_become_command.assert_not_called()
action_base._get_remote_user.return_value = 'apo'
action_base._low_level_execute_command('ECHO', sudoable=True, executable='/bin/csh')
become.build_become_command.assert_called_once_with("ECHO", shell)
become.build_become_command.reset_mock()
with patch.object(C, 'BECOME_ALLOW_SAME_USER', new=True):
action_base._get_remote_user.return_value = 'root'
action_base._low_level_execute_command('ECHO SAME', sudoable=True)
become.build_become_command.assert_called_once_with("ECHO SAME", shell)
def test__remote_expand_user_relative_pathing(self):
action_base = _action_base()
action_base._play_context.remote_addr = 'bar'
action_base._low_level_execute_command = MagicMock(return_value={'stdout': b'../home/user'})
action_base._connection._shell.join_path.return_value = '../home/user/foo'
with self.assertRaises(AnsibleError) as cm:
action_base._remote_expand_user('~/foo')
self.assertEqual(
cm.exception.message,
"'bar' returned an invalid relative home directory path containing '..'"
)
class TestActionBaseCleanReturnedData(unittest.TestCase):
def test(self):
fake_loader = DictDataLoader({
})
mock_module_loader = MagicMock()
mock_shared_loader_obj = MagicMock()
mock_shared_loader_obj.module_loader = mock_module_loader
connection_loader_paths = ['/tmp/asdfadf', '/usr/lib64/whatever',
'dfadfasf',
'foo.py',
'.*',
# FIXME: a path with parens breaks the regex
# '(.*)',
'/path/to/ansible/lib/ansible/plugins/connection/custom_connection.py',
'/path/to/ansible/lib/ansible/plugins/connection/ssh.py']
def fake_all(path_only=None):
for path in connection_loader_paths:
yield path
mock_connection_loader = MagicMock()
mock_connection_loader.all = fake_all
mock_shared_loader_obj.connection_loader = mock_connection_loader
mock_connection = MagicMock()
# mock_connection._shell.env_prefix.side_effect = env_prefix
# action_base = DerivedActionBase(mock_task, mock_connection, play_context, None, None, None)
action_base = DerivedActionBase(task=None,
connection=mock_connection,
play_context=None,
loader=fake_loader,
templar=None,
shared_loader_obj=mock_shared_loader_obj)
data = {'ansible_playbook_python': '/usr/bin/python',
# 'ansible_rsync_path': '/usr/bin/rsync',
'ansible_python_interpreter': '/usr/bin/python',
'ansible_ssh_some_var': 'whatever',
'ansible_ssh_host_key_somehost': 'some key here',
'some_other_var': 'foo bar'}
data = clean_facts(data)
self.assertNotIn('ansible_playbook_python', data)
self.assertNotIn('ansible_python_interpreter', data)
self.assertIn('ansible_ssh_host_key_somehost', data)
self.assertIn('some_other_var', data)
class TestActionBaseParseReturnedData(unittest.TestCase):
def test_fail_no_json(self):
action_base = _action_base()
rc = 0
stdout = 'foo\nbar\n'
err = 'oopsy'
returned_data = {'rc': rc,
'stdout': stdout,
'stdout_lines': stdout.splitlines(),
'stderr': err}
res = action_base._parse_returned_data(returned_data)
self.assertFalse(res['_ansible_parsed'])
self.assertTrue(res['failed'])
self.assertEqual(res['module_stderr'], err)
def test_json_empty(self):
action_base = _action_base()
rc = 0
stdout = '{}\n'
err = ''
returned_data = {'rc': rc,
'stdout': stdout,
'stdout_lines': stdout.splitlines(),
'stderr': err}
res = action_base._parse_returned_data(returned_data)
del res['_ansible_parsed'] # we always have _ansible_parsed
self.assertEqual(len(res), 0)
self.assertFalse(res)
def test_json_facts(self):
action_base = _action_base()
rc = 0
stdout = '{"ansible_facts": {"foo": "bar", "ansible_blip": "blip_value"}}\n'
err = ''
returned_data = {'rc': rc,
'stdout': stdout,
'stdout_lines': stdout.splitlines(),
'stderr': err}
res = action_base._parse_returned_data(returned_data)
self.assertTrue(res['ansible_facts'])
self.assertIn('ansible_blip', res['ansible_facts'])
# TODO: Should this be an AnsibleUnsafe?
# self.assertIsInstance(res['ansible_facts'], AnsibleUnsafe)
def test_json_facts_add_host(self):
action_base = _action_base()
rc = 0
stdout = '''{"ansible_facts": {"foo": "bar", "ansible_blip": "blip_value"},
"add_host": {"host_vars": {"some_key": ["whatever the add_host object is"]}
}
}\n'''
err = ''
returned_data = {'rc': rc,
'stdout': stdout,
'stdout_lines': stdout.splitlines(),
'stderr': err}
res = action_base._parse_returned_data(returned_data)
self.assertTrue(res['ansible_facts'])
self.assertIn('ansible_blip', res['ansible_facts'])
self.assertIn('add_host', res)
# TODO: Should this be an AnsibleUnsafe?
# self.assertIsInstance(res['ansible_facts'], AnsibleUnsafe)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,416 |
galaxy validate_certs remains a string type instead of being typecast to bool
|
### Summary
ansible.cfg's `validate_certs` option for galaxy is never typecast to bool, so the string "False" is actually truthy and certificate validation stays enabled.
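A minimal standalone illustration of the effect (a sketch, not the actual code path):
```python
# What the config loader effectively hands back for `validate_certs=False`
raw = "False"
print(bool(raw))  # True -- any non-empty string is truthy

# An explicit coercion is required, e.g. with Ansible's own helper
from ansible.module_utils.parsing.convert_bool import boolean
print(boolean(raw))  # False
```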
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (devel 6d260ad967) last updated 2022/03/30 10:48:12 (GMT -400)
config file = /home/jtanner/workspace/github/jctanner.redhat/ansible-hub-ui.tags/deleteme/ansible.cfg
configured module search path = ['/home/jtanner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jtanner/workspace/github/ansible/ansible/lib/ansible
ansible collection location = /home/jtanner/.ansible/collections:/usr/share/ansible/collections
executable location = /home/jtanner/venvs/galaxydev/bin/ansible
python version = 3.10.3 (main, Mar 18 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
GALAXY_SERVER_LIST(/home/jtanner/workspace/github/jctanner.redhat/ansible-hub-ui.tags/deleteme/ansible.cfg) = ['hub']
```
### OS / Environment
Fedora25
### Steps to Reproduce
```
$ cat ansible.cfg | fgrep -v token=
[galaxy]
server_list = hub
#validate_certs=false
[galaxy_server.hub]
validate_certs=False
username=jdoe
password=redhat
auth_url=https://keycloak-ephemeral-yhd7ju.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com/auth/realms/redhat-external/protocol/openid-connect/token
url=https://front-end-aggregator-ephemeral-yhd7ju.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com/api/automation-hub/
```
```
ansible-galaxy -vvvv collection publish foo23/test/foo23-test-1.0.0.tar.gz
```
### Expected Results
Certs should be ignored.
### Actual Results
```console
ERROR! Unknown error when attempting to call Galaxy at 'https://front-end-aggregator-ephemeral-yhd7ju.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com/api/automation-hub/api': <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'front-end-aggregator-ephemeral-yhd7ju.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com'. (_ssl.c:997)>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77416
|
https://github.com/ansible/ansible/pull/77424
|
bf8e186c68dc97f7410fab78635d4b15e79ce800
|
87a8fedd945fbec8b7b13664dd3522c4f526bfde
| 2022-03-30T15:51:02Z |
python
| 2022-03-31T19:09:10Z |
changelogs/fragments/77424-fix-False-ansible-galaxy-server-config-options.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,416 |
galaxy validate_certs remains a string type instead of being typecast to bool
|
### Summary
ansible.cfg's `validate_certs` option for galaxy is never typecast to bool, so the string "False" is actually truthy and certificate validation stays enabled.
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (devel 6d260ad967) last updated 2022/03/30 10:48:12 (GMT -400)
config file = /home/jtanner/workspace/github/jctanner.redhat/ansible-hub-ui.tags/deleteme/ansible.cfg
configured module search path = ['/home/jtanner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jtanner/workspace/github/ansible/ansible/lib/ansible
ansible collection location = /home/jtanner/.ansible/collections:/usr/share/ansible/collections
executable location = /home/jtanner/venvs/galaxydev/bin/ansible
python version = 3.10.3 (main, Mar 18 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
GALAXY_SERVER_LIST(/home/jtanner/workspace/github/jctanner.redhat/ansible-hub-ui.tags/deleteme/ansible.cfg) = ['hub']
```
### OS / Environment
Fedora25
### Steps to Reproduce
```
$ cat ansible.cfg | fgrep -v token=
[galaxy]
server_list = hub
#validate_certs=false
[galaxy_server.hub]
validate_certs=False
username=jdoe
password=redhat
auth_url=https://keycloak-ephemeral-yhd7ju.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com/auth/realms/redhat-external/protocol/openid-connect/token
url=https://front-end-aggregator-ephemeral-yhd7ju.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com/api/automation-hub/
```
```
ansible-galaxy -vvvv collection publish foo23/test/foo23-test-1.0.0.tar.gz
```
### Expected Results
Certs should be ignored.
### Actual Results
```console
ERROR! Unknown error when attempting to call Galaxy at 'https://front-end-aggregator-ephemeral-yhd7ju.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com/api/automation-hub/api': <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'front-end-aggregator-ephemeral-yhd7ju.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com'. (_ssl.c:997)>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77416
|
https://github.com/ansible/ansible/pull/77424
|
bf8e186c68dc97f7410fab78635d4b15e79ce800
|
87a8fedd945fbec8b7b13664dd3522c4f526bfde
| 2022-03-30T15:51:02Z |
python
| 2022-03-31T19:09:10Z |
lib/ansible/cli/galaxy.py
|
#!/usr/bin/env python
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import json
import os.path
import re
import shutil
import sys
import textwrap
import time
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import (
build_collection,
download_collections,
find_existing_collections,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
verify_collections,
SIGNATURE_COUNT_RE,
)
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.collection.gpg import GPG_ERROR_MAP
from ansible.galaxy.dependency_resolution.dataclasses import Requirement
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.yaml import yaml_dump, yaml_load
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
SERVER_DEF = [
('url', True),
('username', False),
('password', False),
('token', False),
('auth_url', False),
('v3', False),
('validate_certs', False),
('client_id', False),
]
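# NOTE: each entry above is resolved to a raw config string and nothing
# coerces 'validate_certs' to a bool. One possible repair (a sketch of an
# assumed shape, not necessarily the shipped patch) is to carry a type per
# key and coerce after lookup:
#
#     SERVER_DEF = [
#         ('url', True, 'str'),
#         ('username', False, 'str'),
#         ('password', False, 'str'),
#         ('token', False, 'str'),
#         ('auth_url', False, 'str'),
#         ('v3', False, 'bool'),
#         ('validate_certs', False, 'bool'),
#         ('client_id', False, 'str'),
#     ]
#
# with string values run through ensure_type()/boolean() when the GalaxyAPI
# keyword arguments are assembled in run().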
def with_collection_artifacts_manager(wrapped_method):
"""Inject an artifacts manager if not passed explicitly.
This decorator constructs a ConcreteArtifactsManager and maintains
the related temporary directory auto-cleanup around the target
method invocation.
"""
def method_wrapper(*args, **kwargs):
if 'artifacts_manager' in kwargs:
return wrapped_method(*args, **kwargs)
artifacts_manager_kwargs = {'validate_certs': not context.CLIARGS['ignore_certs']}
keyring = context.CLIARGS.get('keyring', None)
if keyring is not None:
artifacts_manager_kwargs.update({
'keyring': GalaxyCLI._resolve_path(keyring),
'required_signature_count': context.CLIARGS.get('required_valid_signature_count', None),
'ignore_signature_errors': context.CLIARGS.get('ignore_gpg_errors', None),
})
with ConcreteArtifactsManager.under_tmpdir(
C.DEFAULT_LOCAL_TMP,
**artifacts_manager_kwargs
) as concrete_artifact_cm:
kwargs['artifacts_manager'] = concrete_artifact_cm
return wrapped_method(*args, **kwargs)
return method_wrapper
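# Typical (illustrative) use: decorate an execute_* method that accepts an
# 'artifacts_manager' keyword and it receives a managed instance whose
# temporary directory is cleaned up automatically:
#
#     @with_collection_artifacts_manager
#     def execute_install(self, artifacts_manager=None):
#         ...  # artifacts_manager is always set here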
def _display_header(path, h1, h2, w1=10, w2=7):
display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format(
path,
h1,
h2,
'-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header
'-' * max([len(h2), w2]),
cwidth=w1,
vwidth=w2,
))
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7):
display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format(
fqcn=to_text(collection.fqcn),
version=collection.ver,
cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header
vwidth=max(vwidth, min_vwidth)
))
def _get_collection_widths(collections):
if not is_iterable(collections):
collections = (collections, )
fqcn_set = {to_text(c.fqcn) for c in collections}
version_set = {to_text(c.ver) for c in collections}
fqcn_length = len(max(fqcn_set, key=len))
version_length = len(max(version_set, key=len))
return fqcn_length, version_length
def validate_signature_count(value):
match = re.match(SIGNATURE_COUNT_RE, value)
if match is None:
raise ValueError(f"{value} is not a valid signature count value")
return value
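# Per SIGNATURE_COUNT_RE, accepted forms are an integer count ("2"), the
# literal "all", or either prefixed with "+" ("+2", "+all") to additionally
# fail when no valid signatures are found at all.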
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
name = 'ansible-galaxy'
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
self._raw_args = args
self._implicit_role = False
if len(args) > 1:
# Inject role into sys.argv[1] as a backwards compatibility step
if args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
# Remove this in Ansible 2.13 when we also remove -v as an option on the root parser for ansible-galaxy.
idx = 2 if args[1].startswith('-v') else 1
args.insert(idx, 'role')
self._implicit_role = True
# since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization
if args[1:3] == ['role', 'login']:
display.error(
"The login command was removed in late 2020. An API key is now required to publish roles or collections "
"to Galaxy. The key can be found at https://galaxy.ansible.com/me/preferences, and passed to the "
"ansible-galaxy CLI via a file at {0} or (insecurely) via the `--token` "
"command-line argument.".format(to_text(C.GALAXY_TOKEN_PATH)))
sys.exit(1)
self.api_servers = []
self.galaxy = None
self._api = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible-galaxy '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs',
default=C.GALAXY_IGNORE_CERTS, help='Ignore SSL certificate validation errors.')
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
collections_path = opt_help.argparse.ArgumentParser(add_help=False)
collections_path.add_argument('-p', '--collections-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True),
default=AnsibleCollectionConfig.collection_paths,
action=opt_help.PrependListAction,
help="One or more directories to search for collections in addition "
"to the default COLLECTIONS_PATHS. Separate multiple paths "
"with '{0}'.".format(os.path.pathsep))
cache_options = opt_help.argparse.ArgumentParser(add_help=False)
cache_options.add_argument('--clear-response-cache', dest='clear_response_cache', action='store_true',
default=False, help='Clear the existing server response cache.')
cache_options.add_argument('--no-cache', dest='no_cache', action='store_true', default=False,
help='Do not use the server response cache.')
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_download_options(collection_parser, parents=[common, cache_options])
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force, cache_options])
self.add_list_options(collection_parser, parents=[common, collections_path])
self.add_verify_options(collection_parser, parents=[common, collections_path])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_download_options(self, parser, parents=None):
download_parser = parser.add_parser('download', parents=parents,
help='Download collections and their dependencies as a tarball for an '
'offline install.')
download_parser.set_defaults(func=self.execute_download)
download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*')
download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collection(s) listed as dependencies.")
download_parser.add_argument('-p', '--download-path', dest='download_path',
default='./collections',
help='The directory to download the collections to.')
download_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be downloaded.')
download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_COLLECTION_SKELETON if galaxy_type == 'collection' else C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
galaxy_type = 'role'
if parser.metavar == 'COLLECTION_ACTION':
galaxy_type = 'collection'
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type))
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type)
if galaxy_type == 'collection':
list_parser.add_argument('--format', dest='output_format', choices=('human', 'yaml', 'json'), default='human',
help="Format to display the list of collections in.")
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_verify_options(self, parser, parents=None):
galaxy_type = 'collection'
verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) '
'found on the server and the installed copy. This does not verify dependencies.')
verify_parser.set_defaults(func=self.execute_verify)
verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The installed collection(s) name. '
'This is mutually exclusive with --requirements-file.')
verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during verification and continue with the next specified collection.')
verify_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Validate collection integrity locally without contacting server for '
'canonical manifest hash.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
verify_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
verify_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before using '
'it to verify the rest of the contents of a collection from a Galaxy server. Use in '
'conjunction with a positional collection name (mutually exclusive with --requirements-file).')
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
verify_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
verify_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or -1 to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=self._get_default_collection_path(),
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
install_parser.add_argument('-U', '--upgrade', dest='upgrade', action='store_true', default=False,
help='Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided')
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before '
'installing the collection from a Galaxy server. Use in conjunction with a positional '
'collection name (mutually exclusive with --requirements-file).')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
else:
install_parser.add_argument('-r', '--role-file', dest='requirements',
help='A file containing a list of roles to be installed.')
if self._implicit_role and ('-r' in self._raw_args or '--role-file' in self._raw_args):
# Any collections in the requirements files will also be installed
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during collection signature verification')
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection is built. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required):
return {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
}
validate_certs_fallback = not context.CLIARGS['ignore_certs']
galaxy_options = {}
for optional_key in ['clear_response_cache', 'no_cache']:
if optional_key in context.CLIARGS:
galaxy_options[optional_key] = context.CLIARGS[optional_key]
config_servers = []
# Need to filter out empty strings or non-truthy values, as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_priority, server_key in enumerate(server_list, start=1):
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req)) for k, req in SERVER_DEF)
defs = AnsibleLoader(yaml_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as kwarg to GalaxyApi
auth_url = server_options.pop('auth_url', None)
client_id = server_options.pop('client_id', None)
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
available_api_versions = None
v3 = server_options.pop('v3', None)
validate_certs = server_options['validate_certs']
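# A per-server validate_certs setting wins; only fall back to the global
# --ignore-certs derived default when the server config leaves it unset.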
if validate_certs is None:
validate_certs = validate_certs_fallback
server_options['validate_certs'] = validate_certs
if v3:
# This allows a user to explicitly indicate the server uses the /v3 API
# This was added for testing against pulp_ansible and I'm not sure it has
# a practical purpose outside of this use case. As such, this option is not
# documented as of now
server_options['available_api_versions'] = {'v3': '/v3'}
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username,
server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=validate_certs,
client_id=client_id)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
server_options.update(galaxy_options)
config_servers.append(GalaxyAPI(
self.galaxy, server_key,
priority=server_priority,
**server_options
))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
if cmd_server:
# Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
priority=len(config_servers) + 1,
validate_certs=validate_certs_fallback,
**galaxy_options
))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
priority=0,
validate_certs=validate_certs_fallback,
**galaxy_options
))
return context.CLIARGS['func']()
@property
def api(self):
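# Lazily resolve the API server: prefer the first configured server that
# advertises a v1 API (needed for role operations), else the first one.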
if self._api:
return self._api
for server in self.api_servers:
try:
if u'v1' in server.available_api_versions:
self._api = server
break
except Exception:
continue
if not self._api:
self._api = self.api_servers[0]
return self._api
def _get_default_collection_path(self):
return C.COLLECTIONS_PATHS[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True, artifacts_manager=None, validate_signature_options=True):
"""
Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are two
requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name, defaults to Galaxy name from Galaxy or name of repo if src is a URL.
scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git.
version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
type: git|file|url|galaxy
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:param artifacts_manager: Artifacts manager.
:return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles') or []:
requirements['roles'] += parse_role_req(role_req)
requirements['collections'] = [
Requirement.from_requirement_dict(
self._init_coll_req_dict(collection_req),
artifacts_manager,
validate_signature_options,
)
for collection_req in file_requirements.get('collections') or []
]
return requirements
def _init_coll_req_dict(self, coll_req):
if not isinstance(coll_req, dict):
# Assume it's a string:
return {'name': coll_req}
if (
'name' not in coll_req or
not coll_req.get('source') or
coll_req.get('type', 'galaxy') != 'galaxy'
):
return coll_req
# Try and match up the requirement source with our list of Galaxy API
# servers defined in the config, otherwise create a server with that
# URL without any auth.
coll_req['source'] = next(
iter(
srvr for srvr in self.api_servers
if coll_req['source'] in {srvr.name, srvr.api_server}
),
GalaxyAPI(
self.galaxy,
'explicit_requirement_{name!s}'.format(
name=coll_req['name'],
),
coll_req['source'],
validate_certs=not context.CLIARGS['ignore_certs'],
),
)
return coll_req
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
# Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description'].
galaxy_info = role_info.get('galaxy_info', {})
description = role_info.get('description', galaxy_info.get('description', ''))
text.append(u"\tdescription: %s" % description)
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
# make sure we have a trailing newline returned
text.append(u"")
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
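# comment_ify renders option descriptions as '#'-prefixed YAML comment blocks,
# rewriting L(text, url) links as 'text <url>' and quoting C(value) constants.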
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
loader = DataLoader()
templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config})
templar.environment.filters['comment_ify'] = comment_ify
meta_value = templar.template(meta_template)
return meta_value
def _require_one_of_collections_requirements(
self, collections, requirements_file,
signatures=None,
artifacts_manager=None,
):
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
if signatures is not None:
raise AnsibleError(
"The --signatures option and --requirements-file are mutually exclusive. "
"Use the --signatures with positional collection_name args or provide a "
"'signatures' key for requirements in the --requirements-file."
)
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(
requirements_file,
allow_old_format=False,
artifacts_manager=artifacts_manager,
)
else:
requirements = {
'collections': [
Requirement.from_string(coll_input, artifacts_manager, signatures)
for coll_input in collections
],
'roles': [],
}
return requirements
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(
to_text(collection_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force,
)
@with_collection_artifacts_manager
def execute_download(self, artifacts_manager=None):
collections = context.CLIARGS['args']
no_deps = context.CLIARGS['no_deps']
download_path = context.CLIARGS['download_path']
requirements_file = context.CLIARGS['requirements']
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
artifacts_manager=artifacts_manager,
)['collections']
download_path = GalaxyCLI._resolve_path(download_path)
b_download_path = to_bytes(download_path, errors='surrogate_or_strict')
if not os.path.exists(b_download_path):
os.makedirs(b_download_path)
download_collections(
requirements, download_path, self.api_servers, no_deps,
context.CLIARGS['allow_pre_release'],
artifacts_manager=artifacts_manager,
)
return 0
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
dependencies=[],
))
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
skeleton_ignore_expressions = C.GALAXY_COLLECTION_SKELETON_IGNORE
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
if obj_skeleton is not None:
own_skeleton = False
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
loader = DataLoader()
templar = Templar(loader, variables=inject_data)
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
# Filter out ignored directory names
# Use [:] to mutate the list os.walk uses
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict')
b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict')
with open(dest_file, 'wb') as df:
df.write(b_rendered)
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
prints out detailed information about an installed role as well as info available from the galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
if not context.CLIARGS['offline']:
remote_data = None
try:
remote_data = self.api.lookup_role_by_name(role, False)
except AnsibleError as e:
if e.http_code == 400 and 'Bad Request' in e.message:
# Role does not exist in Ansible Galaxy
data = u"- the role %s was not found" % role
break
raise AnsibleError("Unable to find info about '%s': %s" % (role, e))
if remote_data:
role_info.update(remote_data)
elif context.CLIARGS['offline'] and not gr._exists:
data = u"- the role %s was not found" % role
break
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data += self._display_role_info(role_info)
self.pager(data)
@with_collection_artifacts_manager
def execute_verify(self, artifacts_manager=None):
collections = context.CLIARGS['args']
search_paths = context.CLIARGS['collections_path']
ignore_errors = context.CLIARGS['ignore_errors']
local_verify_only = context.CLIARGS['offline']
requirements_file = context.CLIARGS['requirements']
signatures = context.CLIARGS['signatures']
if signatures is not None:
signatures = list(signatures)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)['collections']
resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths]
results = verify_collections(
requirements, resolved_paths,
self.api_servers, ignore_errors,
local_verify_only=local_verify_only,
artifacts_manager=artifacts_manager,
)
if any(result for result in results if not result.success):
return 1
return 0
@with_collection_artifacts_manager
def execute_install(self, artifacts_manager=None):
"""
Install one or more roles (``ansible-galaxy role install``), or one or more collections (``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file.
:param artifacts_manager: Artifacts manager.
"""
install_items = context.CLIARGS['args']
requirements_file = context.CLIARGS['requirements']
collection_path = None
signatures = context.CLIARGS.get('signatures')
if signatures is not None:
signatures = list(signatures)
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
two_type_warning = "The requirements file '%s' contains {0}s which will be ignored. To install these {0}s " \
"run 'ansible-galaxy {0} install -r' or to install both at the same time run " \
"'ansible-galaxy install -r' without a custom install path." % to_text(requirements_file)
# TODO: Would be nice to share the same behaviour with args and -r in collections and roles.
collection_requirements = []
role_requirements = []
if context.CLIARGS['type'] == 'collection':
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path'])
requirements = self._require_one_of_collections_requirements(
install_items, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)
collection_requirements = requirements['collections']
if requirements['roles']:
display.vvv(two_type_warning.format('role'))
else:
if not install_items and requirements_file is None:
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
if requirements_file:
if not (requirements_file.endswith('.yaml') or requirements_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
galaxy_args = self._raw_args
will_install_collections = self._implicit_role and '-p' not in galaxy_args and '--roles-path' not in galaxy_args
requirements = self._parse_requirements_file(
requirements_file,
artifacts_manager=artifacts_manager,
validate_signature_options=will_install_collections,
)
role_requirements = requirements['roles']
# We can only install collections and roles at the same time if the type wasn't specified and the -p
# argument was not used. If collections are present in the requirements then at least display a msg.
if requirements['collections'] and (not self._implicit_role or '-p' in galaxy_args or
'--roles-path' in galaxy_args):
# We only want to display a warning if 'ansible-galaxy install -r ... -p ...' was used. In other cases the
# user was explicit about the type and shouldn't care that collections were skipped.
display_func = display.warning if self._implicit_role else display.vvv
display_func(two_type_warning.format('collection'))
else:
collection_path = self._get_default_collection_path()
collection_requirements = requirements['collections']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
role_requirements.append(GalaxyRole(self.galaxy, self.api, **role))
if not role_requirements and not collection_requirements:
display.display("Skipping install, no requirements found")
return
if role_requirements:
display.display("Starting galaxy role install process")
self._execute_install_role(role_requirements)
if collection_requirements:
display.display("Starting galaxy collection install process")
# Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in
# the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above).
self._execute_install_collection(
collection_requirements, collection_path,
artifacts_manager=artifacts_manager,
)
def _execute_install_collection(
self, requirements, path, artifacts_manager,
):
force = context.CLIARGS['force']
ignore_errors = context.CLIARGS['ignore_errors']
no_deps = context.CLIARGS['no_deps']
force_with_deps = context.CLIARGS['force_with_deps']
disable_gpg_verify = context.CLIARGS['disable_gpg_verify']
# If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS
allow_pre_release = context.CLIARGS.get('allow_pre_release', False)
upgrade = context.CLIARGS.get('upgrade', False)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection won't be picked up in an Ansible "
"run." % (to_text(path), to_text(":".join(collections_path))))
output_path = validate_collection_path(path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(
requirements, output_path, self.api_servers, ignore_errors,
no_deps, force, force_with_deps, upgrade,
allow_pre_release=allow_pre_release,
artifacts_manager=artifacts_manager,
disable_gpg_verify=disable_gpg_verify,
)
return 0
def _execute_install_role(self, requirements):
role_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
for role in requirements:
# only process roles in roles files when names matches if given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = (role.metadata.get('dependencies') or []) + role.requirements
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in requirements:
display.display('- adding dependency: %s' % to_text(dep_role))
requirements.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
requirements.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
requirements.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List installed collections or roles
"""
if context.CLIARGS['type'] == 'role':
self.execute_list_role()
elif context.CLIARGS['type'] == 'collection':
self.execute_list_collection()
def execute_list_role(self):
"""
List all roles installed on the local system or a specific role
"""
path_found = False
role_found = False
warnings = []
roles_search_paths = context.CLIARGS['roles_path']
role_name = context.CLIARGS['role']
for path in roles_search_paths:
role_path = GalaxyCLI._resolve_path(path)
if os.path.isdir(path):
path_found = True
else:
warnings.append("- the configured path {0} does not exist.".format(path))
continue
if role_name:
# show the requested role, if it exists
gr = GalaxyRole(self.galaxy, self.api, role_name, path=os.path.join(role_path, role_name))
if os.path.isdir(gr.path):
role_found = True
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
break
warnings.append("- the role %s was not found" % role_name)
else:
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
if not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.api, path_file, path=path)
if gr.metadata:
_display_role(gr)
# Do not warn if the role was found in any of the search paths
if role_found and role_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
return 0
@with_collection_artifacts_manager
def execute_list_collection(self, artifacts_manager=None):
"""
List all collections installed on the local system
:param artifacts_manager: Artifacts manager.
"""
output_format = context.CLIARGS['output_format']
collections_search_paths = set(context.CLIARGS['collections_path'])
collection_name = context.CLIARGS['collection']
default_collections_path = AnsibleCollectionConfig.collection_paths
collections_in_paths = {}
warnings = []
path_found = False
collection_found = False
for path in collections_search_paths:
collection_path = GalaxyCLI._resolve_path(path)
if not os.path.exists(path):
if path in default_collections_path:
# don't warn for missing default paths
continue
warnings.append("- the configured path {0} does not exist.".format(collection_path))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
path_found = True
if collection_name:
# list a specific collection
validate_collection_name(collection_name)
namespace, collection = collection_name.split('.')
collection_path = validate_collection_path(collection_path)
b_collection_path = to_bytes(os.path.join(collection_path, namespace, collection), errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
warnings.append("- unable to find {0} in collection paths".format(collection_name))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
collection_found = True
try:
collection = Requirement.from_dir_path_as_unknown(
b_collection_path,
artifacts_manager,
)
except ValueError as val_err:
six.raise_from(AnsibleError(val_err), val_err)
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver}
}
continue
fqcn_width, version_width = _get_collection_widths([collection])
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
_display_collection(collection, fqcn_width, version_width)
else:
# list all collections
collection_path = validate_collection_path(path)
if os.path.isdir(collection_path):
display.vvv("Searching {0} for collections".format(collection_path))
collections = list(find_existing_collections(
collection_path, artifacts_manager,
))
else:
# There was no 'ansible_collections/' directory in the path, so there
# are no collections here.
display.vvv("No 'ansible_collections' directory found at {0}".format(collection_path))
continue
if not collections:
display.vvv("No collections found at {0}".format(collection_path))
continue
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver} for collection in collections
}
continue
# Display header
fqcn_width, version_width = _get_collection_widths(collections)
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
# Sort collections by the namespace and name
for collection in sorted(collections, key=to_text):
_display_collection(collection, fqcn_width, version_width)
# Do not warn if the specific collection was found in any of the search paths
if collection_found and collection_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
if output_format == 'json':
display.display(json.dumps(collections_in_paths))
elif output_format == 'yaml':
display.display(yaml_dump(collections_in_paths))
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' searches for roles on the Ansible Galaxy server'''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return True
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
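# Poll the import task roughly every 10 seconds, echoing any new messages,
# until the server reports a terminal SUCCESS or FAILED state.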
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return True
def main(args=None):
GalaxyCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,416 |
galaxy validate_certs remains a string type instead of being typecast to bool
|
### Summary
ansible.cfg's validate_certs option for galaxy is never typecast to bool, so the string "False" is actually truthy.
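A minimal illustration of the failure mode (plain Python, independent of ansible): ini values arrive as strings, and any non-empty string is truthy, so a configured `validate_certs=False` still behaves as if verification were enabled.
```python
>>> validate_certs = "False"   # value as read from ansible.cfg is still a str
>>> bool(validate_certs)       # any non-empty string is truthy
True
>>> if validate_certs:         # so this branch runs despite "False" in the config
...     print("certificates will be verified")
...
certificates will be verified
```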
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (devel 6d260ad967) last updated 2022/03/30 10:48:12 (GMT -400)
config file = /home/jtanner/workspace/github/jctanner.redhat/ansible-hub-ui.tags/deleteme/ansible.cfg
configured module search path = ['/home/jtanner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jtanner/workspace/github/ansible/ansible/lib/ansible
ansible collection location = /home/jtanner/.ansible/collections:/usr/share/ansible/collections
executable location = /home/jtanner/venvs/galaxydev/bin/ansible
python version = 3.10.3 (main, Mar 18 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
GALAXY_SERVER_LIST(/home/jtanner/workspace/github/jctanner.redhat/ansible-hub-ui.tags/deleteme/ansible.cfg) = ['hub']
```
### OS / Environment
Fedora 25
### Steps to Reproduce
```
$ cat ansible.cfg | fgrep -v token=
[galaxy]
server_list = hub
#validate_certs=false
[galaxy_server.hub]
validate_certs=False
username=jdoe
password=redhat
auth_url=https://keycloak-ephemeral-yhd7ju.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com/auth/realms/redhat-external/protocol/openid-connect/token
url=https://front-end-aggregator-ephemeral-yhd7ju.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com/api/automation-hub/
```
```
ansible-galaxy -vvvv collection publish foo23/test/foo23-test-1.0.0.tar.gz
```
### Expected Results
Certificate verification should be skipped, since validate_certs=False is set for the server.
### Actual Results
```console
ERROR! Unknown error when attempting to call Galaxy at 'https://front-end-aggregator-ephemeral-yhd7ju.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com/api/automation-hub/api': <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'front-end-aggregator-ephemeral-yhd7ju.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com'. (_ssl.c:997)>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77416
|
https://github.com/ansible/ansible/pull/77424
|
bf8e186c68dc97f7410fab78635d4b15e79ce800
|
87a8fedd945fbec8b7b13664dd3522c4f526bfde
| 2022-03-30T15:51:02Z |
python
| 2022-03-31T19:09:10Z |
test/units/galaxy/test_collection.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import pytest
import re
import tarfile
import uuid
from hashlib import sha256
from io import BytesIO
from mock import MagicMock, mock_open, patch
import ansible.constants as C
from ansible import context
from ansible.cli.galaxy import GalaxyCLI, SERVER_DEF
from ansible.errors import AnsibleError
from ansible.galaxy import api, collection, token
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.six.moves import builtins
from ansible.utils import context_objects as co
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash_s
@pytest.fixture(autouse='function')
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
yield
co.GlobalCLIArgs._Singleton__instance = None
@pytest.fixture()
def collection_input(tmp_path_factory):
''' Creates a collection skeleton directory for build tests '''
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
namespace = 'ansible_namespace'
collection = 'collection'
skeleton = os.path.join(os.path.dirname(os.path.split(__file__)[0]), 'cli', 'test_data', 'collection_skeleton')
galaxy_args = ['ansible-galaxy', 'collection', 'init', '%s.%s' % (namespace, collection),
'-c', '--init-path', test_dir, '--collection-skeleton', skeleton]
GalaxyCLI(args=galaxy_args).run()
collection_dir = os.path.join(test_dir, namespace, collection)
output_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Output'))
return collection_dir, output_dir
@pytest.fixture()
def collection_artifact(monkeypatch, tmp_path_factory):
''' Creates a temp collection artifact and mocked open_url instance for publishing tests '''
mock_open = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager, 'open_url', mock_open)
mock_uuid = MagicMock()
mock_uuid.return_value.hex = 'uuid'
monkeypatch.setattr(uuid, 'uuid4', mock_uuid)
tmp_path = tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections')
input_file = to_text(tmp_path / 'collection.tar.gz')
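# Build a minimal tar.gz with a single 4-byte member to stand in for a
# real collection artifact in the publish tests.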
with tarfile.open(input_file, 'w:gz') as tfile:
b_io = BytesIO(b"\x00\x01\x02\x03")
tar_info = tarfile.TarInfo('test')
tar_info.size = 4
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
return input_file, mock_open
@pytest.fixture()
def galaxy_yml_dir(request, tmp_path_factory):
b_test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
b_galaxy_yml = os.path.join(b_test_dir, b'galaxy.yml')
with open(b_galaxy_yml, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(request.param))
yield b_test_dir
@pytest.fixture()
def tmp_tarfile(tmp_path_factory, manifest_info):
''' Creates a temporary tar file for _extract_tar_file tests '''
filename = u'ÅÑŚÌβŁÈ'
temp_dir = to_bytes(tmp_path_factory.mktemp('test-%s Collections' % to_native(filename)))
tar_file = os.path.join(temp_dir, to_bytes('%s.tar.gz' % filename))
data = os.urandom(8)
with tarfile.open(tar_file, 'w:gz') as tfile:
b_io = BytesIO(data)
tar_info = tarfile.TarInfo(filename)
tar_info.size = len(data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
b_data = to_bytes(json.dumps(manifest_info, indent=True), errors='surrogate_or_strict')
b_io = BytesIO(b_data)
tar_info = tarfile.TarInfo('MANIFEST.json')
tar_info.size = len(b_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
sha256_hash = sha256()
sha256_hash.update(data)
with tarfile.open(tar_file, 'r') as tfile:
yield temp_dir, tfile, filename, sha256_hash.hexdigest()
@pytest.fixture()
def galaxy_server():
context.CLIARGS._store = {'ignore_certs': False}
galaxy_api = api.GalaxyAPI(None, 'test_server', 'https://galaxy.ansible.com',
token=token.GalaxyToken(token='key'))
return galaxy_api
@pytest.fixture()
def manifest_template():
def get_manifest_info(namespace='ansible_namespace', name='collection', version='0.1.0'):
return {
"collection_info": {
"namespace": namespace,
"name": name,
"version": version,
"authors": [
"shertel"
],
"readme": "README.md",
"tags": [
"test",
"collection"
],
"description": "Test",
"license": [
"MIT"
],
"license_file": None,
"dependencies": {},
"repository": "https://github.com/{0}/{1}".format(namespace, name),
"documentation": None,
"homepage": None,
"issues": None
},
"file_manifest_file": {
"name": "FILES.json",
"ftype": "file",
"chksum_type": "sha256",
"chksum_sha256": "files_manifest_checksum",
"format": 1
},
"format": 1
}
return get_manifest_info
@pytest.fixture()
def manifest_info(manifest_template):
return manifest_template()
@pytest.fixture()
def files_manifest_info():
return {
"files": [
{
"name": ".",
"ftype": "dir",
"chksum_type": None,
"chksum_sha256": None,
"format": 1
},
{
"name": "README.md",
"ftype": "file",
"chksum_type": "sha256",
"chksum_sha256": "individual_file_checksum",
"format": 1
}
],
"format": 1}
@pytest.fixture()
def manifest(manifest_info):
b_data = to_bytes(json.dumps(manifest_info))
with patch.object(builtins, 'open', mock_open(read_data=b_data)) as m:
with open('MANIFEST.json', mode='rb') as fake_file:
yield fake_file, sha256(b_data).hexdigest()
@pytest.fixture()
def server_config(monkeypatch):
monkeypatch.setattr(C, 'GALAXY_SERVER_LIST', ['server1', 'server2', 'server3'])
default_options = dict((k, None) for k, v in SERVER_DEF)
server1 = dict(default_options)
server1.update({'url': 'https://galaxy.ansible.com/api/', 'validate_certs': False})
server2 = dict(default_options)
server2.update({'url': 'https://galaxy.ansible.com/api/', 'validate_certs': True})
server3 = dict(default_options)
server3.update({'url': 'https://galaxy.ansible.com/api/'})
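# server1 explicitly disables cert validation, server2 explicitly enables it,
# and server3 leaves validate_certs unset so the CLI-level fallback applies.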
return server1, server2, server3
@pytest.mark.parametrize(
'required_signature_count,valid',
[
("1", True),
("+1", True),
("all", True),
("+all", True),
("-1", False),
("invalid", False),
("1.5", False),
("+", False),
]
)
def test_cli_options(required_signature_count, valid, monkeypatch):
cli_args = [
'ansible-galaxy',
'collection',
'install',
'namespace.collection:1.0.0',
'--keyring',
'~/.ansible/pubring.kbx',
'--required-valid-signature-count',
required_signature_count
]
galaxy_cli = GalaxyCLI(args=cli_args)
mock_execute_install = MagicMock()
monkeypatch.setattr(galaxy_cli, '_execute_install_collection', mock_execute_install)
if valid:
galaxy_cli.run()
else:
with pytest.raises(SystemExit, match='2') as error:
galaxy_cli.run()
@pytest.mark.parametrize('global_ignore_certs', [True, False])
def test_validate_certs(global_ignore_certs, monkeypatch):
cli_args = [
'ansible-galaxy',
'collection',
'install',
'namespace.collection:1.0.0',
]
if global_ignore_certs:
cli_args.append('--ignore-certs')
galaxy_cli = GalaxyCLI(args=cli_args)
mock_execute_install = MagicMock()
monkeypatch.setattr(galaxy_cli, '_execute_install_collection', mock_execute_install)
galaxy_cli.run()
assert len(galaxy_cli.api_servers) == 1
assert galaxy_cli.api_servers[0].validate_certs is not global_ignore_certs
@pytest.mark.parametrize('global_ignore_certs', [True, False])
def test_validate_certs_with_server_url(global_ignore_certs, monkeypatch):
cli_args = [
'ansible-galaxy',
'collection',
'install',
'namespace.collection:1.0.0',
'-s',
'https://galaxy.ansible.com'
]
if global_ignore_certs:
cli_args.append('--ignore-certs')
galaxy_cli = GalaxyCLI(args=cli_args)
mock_execute_install = MagicMock()
monkeypatch.setattr(galaxy_cli, '_execute_install_collection', mock_execute_install)
galaxy_cli.run()
assert len(galaxy_cli.api_servers) == 1
assert galaxy_cli.api_servers[0].validate_certs is not global_ignore_certs
@pytest.mark.parametrize('global_ignore_certs', [True, False])
def test_validate_certs_with_server_config(global_ignore_certs, server_config, monkeypatch):
get_plugin_options = MagicMock(side_effect=server_config)
monkeypatch.setattr(C.config, 'get_plugin_options', get_plugin_options)
cli_args = [
'ansible-galaxy',
'collection',
'install',
'namespace.collection:1.0.0',
]
if global_ignore_certs:
cli_args.append('--ignore-certs')
galaxy_cli = GalaxyCLI(args=cli_args)
mock_execute_install = MagicMock()
monkeypatch.setattr(galaxy_cli, '_execute_install_collection', mock_execute_install)
galaxy_cli.run()
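# Explicit per-server values win; only server3 (unset) inherits the
# global --ignore-certs behaviour.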
assert galaxy_cli.api_servers[0].validate_certs is False
assert galaxy_cli.api_servers[1].validate_certs is True
assert galaxy_cli.api_servers[2].validate_certs is not global_ignore_certs
def test_build_collection_no_galaxy_yaml():
fake_path = u'/fake/ÅÑŚÌβŁÈ/path'
expected = to_native("The collection galaxy.yml path '%s/galaxy.yml' does not exist." % fake_path)
with pytest.raises(AnsibleError, match=expected):
collection.build_collection(fake_path, u'output', False)
def test_build_existing_output_file(collection_input):
input_dir, output_dir = collection_input
existing_output_dir = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
os.makedirs(existing_output_dir)
expected = "The output collection artifact '%s' already exists, but is a directory - aborting" \
% to_native(existing_output_dir)
with pytest.raises(AnsibleError, match=expected):
collection.build_collection(to_text(input_dir, errors='surrogate_or_strict'), to_text(output_dir, errors='surrogate_or_strict'), False)
def test_build_existing_output_without_force(collection_input):
input_dir, output_dir = collection_input
existing_output = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
with open(existing_output, 'w+') as out_file:
out_file.write("random garbage")
out_file.flush()
expected = "The file '%s' already exists. You can use --force to re-create the collection artifact." \
% to_native(existing_output)
with pytest.raises(AnsibleError, match=expected):
collection.build_collection(to_text(input_dir, errors='surrogate_or_strict'), to_text(output_dir, errors='surrogate_or_strict'), False)
def test_build_existing_output_with_force(collection_input):
input_dir, output_dir = collection_input
existing_output = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
with open(existing_output, 'w+') as out_file:
out_file.write("random garbage")
out_file.flush()
collection.build_collection(to_text(input_dir, errors='surrogate_or_strict'), to_text(output_dir, errors='surrogate_or_strict'), True)
# Verify the file was replaced with an actual tar file
assert tarfile.is_tarfile(existing_output)
def test_build_with_existing_files_and_manifest(collection_input):
input_dir, output_dir = collection_input
with open(os.path.join(input_dir, 'MANIFEST.json'), "wb") as fd:
fd.write(b'{"collection_info": {"version": "6.6.6"}, "version": 1}')
with open(os.path.join(input_dir, 'FILES.json'), "wb") as fd:
fd.write(b'{"files": [], "format": 1}')
with open(os.path.join(input_dir, "plugins", "MANIFEST.json"), "wb") as fd:
fd.write(b"test data that should be in build")
collection.build_collection(to_text(input_dir, errors='surrogate_or_strict'), to_text(output_dir, errors='surrogate_or_strict'), False)
output_artifact = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
assert tarfile.is_tarfile(output_artifact)
with tarfile.open(output_artifact, mode='r') as actual:
members = actual.getmembers()
manifest_file = next(m for m in members if m.path == "MANIFEST.json")
manifest_file_obj = actual.extractfile(manifest_file.name)
manifest_file_text = manifest_file_obj.read()
manifest_file_obj.close()
assert manifest_file_text != b'{"collection_info": {"version": "6.6.6"}, "version": 1}'
        json_file = next(m for m in members if m.path == "FILES.json")
json_file_obj = actual.extractfile(json_file.name)
json_file_text = json_file_obj.read()
json_file_obj.close()
assert json_file_text != b'{"files": [], "format": 1}'
sub_manifest_file = next(m for m in members if m.path == "plugins/MANIFEST.json")
sub_manifest_file_obj = actual.extractfile(sub_manifest_file.name)
sub_manifest_file_text = sub_manifest_file_obj.read()
sub_manifest_file_obj.close()
assert sub_manifest_file_text == b"test data that should be in build"
@pytest.mark.parametrize('galaxy_yml_dir', [b'namespace: value: broken'], indirect=True)
def test_invalid_yaml_galaxy_file(galaxy_yml_dir):
galaxy_file = os.path.join(galaxy_yml_dir, b'galaxy.yml')
expected = to_native(b"Failed to parse the galaxy.yml at '%s' with the following error:" % galaxy_file)
with pytest.raises(AnsibleError, match=expected):
collection.concrete_artifact_manager._get_meta_from_src_dir(galaxy_yml_dir)
@pytest.mark.parametrize('galaxy_yml_dir', [b'namespace: test_namespace'], indirect=True)
def test_missing_required_galaxy_key(galaxy_yml_dir):
galaxy_file = os.path.join(galaxy_yml_dir, b'galaxy.yml')
expected = "The collection galaxy.yml at '%s' is missing the following mandatory keys: authors, name, " \
"readme, version" % to_native(galaxy_file)
with pytest.raises(AnsibleError, match=expected):
collection.concrete_artifact_manager._get_meta_from_src_dir(galaxy_yml_dir)
@pytest.mark.parametrize('galaxy_yml_dir', [b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md
invalid: value"""], indirect=True)
def test_warning_extra_keys(galaxy_yml_dir, monkeypatch):
display_mock = MagicMock()
monkeypatch.setattr(Display, 'warning', display_mock)
collection.concrete_artifact_manager._get_meta_from_src_dir(galaxy_yml_dir)
assert display_mock.call_count == 1
assert display_mock.call_args[0][0] == "Found unknown keys in collection galaxy.yml at '%s/galaxy.yml': invalid"\
% to_text(galaxy_yml_dir)
@pytest.mark.parametrize('galaxy_yml_dir', [b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md"""], indirect=True)
def test_defaults_galaxy_yml(galaxy_yml_dir):
actual = collection.concrete_artifact_manager._get_meta_from_src_dir(galaxy_yml_dir)
assert actual['namespace'] == 'namespace'
assert actual['name'] == 'collection'
assert actual['authors'] == ['Jordan']
assert actual['version'] == '0.1.0'
assert actual['readme'] == 'README.md'
assert actual['description'] is None
assert actual['repository'] is None
assert actual['documentation'] is None
assert actual['homepage'] is None
assert actual['issues'] is None
assert actual['tags'] == []
assert actual['dependencies'] == {}
assert actual['license'] == []
@pytest.mark.parametrize('galaxy_yml_dir', [(b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md
license: MIT"""), (b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md
license:
- MIT""")], indirect=True)
def test_galaxy_yml_list_value(galaxy_yml_dir):
actual = collection.concrete_artifact_manager._get_meta_from_src_dir(galaxy_yml_dir)
assert actual['license'] == ['MIT']
def test_build_ignore_files_and_folders(collection_input, monkeypatch):
input_dir = collection_input[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
git_folder = os.path.join(input_dir, '.git')
retry_file = os.path.join(input_dir, 'ansible.retry')
tests_folder = os.path.join(input_dir, 'tests', 'output')
tests_output_file = os.path.join(tests_folder, 'result.txt')
os.makedirs(git_folder)
os.makedirs(tests_folder)
with open(retry_file, 'w+') as ignore_file:
ignore_file.write('random')
ignore_file.flush()
with open(tests_output_file, 'w+') as tests_file:
tests_file.write('random')
tests_file.flush()
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection', [])
assert actual['format'] == 1
for manifest_entry in actual['files']:
assert manifest_entry['name'] not in ['.git', 'ansible.retry', 'galaxy.yml', 'tests/output', 'tests/output/result.txt']
expected_msgs = [
"Skipping '%s/galaxy.yml' for collection build" % to_text(input_dir),
"Skipping '%s' for collection build" % to_text(retry_file),
"Skipping '%s' for collection build" % to_text(git_folder),
"Skipping '%s' for collection build" % to_text(tests_folder),
]
assert mock_display.call_count == 4
assert mock_display.mock_calls[0][1][0] in expected_msgs
assert mock_display.mock_calls[1][1][0] in expected_msgs
assert mock_display.mock_calls[2][1][0] in expected_msgs
assert mock_display.mock_calls[3][1][0] in expected_msgs
def test_build_ignore_older_release_in_root(collection_input, monkeypatch):
input_dir = collection_input[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
# This is expected to be ignored because it is in the root collection dir.
release_file = os.path.join(input_dir, 'namespace-collection-0.0.0.tar.gz')
# This is not expected to be ignored because it is not in the root collection dir.
fake_release_file = os.path.join(input_dir, 'plugins', 'namespace-collection-0.0.0.tar.gz')
for filename in [release_file, fake_release_file]:
with open(filename, 'w+') as file_obj:
file_obj.write('random')
file_obj.flush()
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection', [])
assert actual['format'] == 1
plugin_release_found = False
for manifest_entry in actual['files']:
assert manifest_entry['name'] != 'namespace-collection-0.0.0.tar.gz'
if manifest_entry['name'] == 'plugins/namespace-collection-0.0.0.tar.gz':
plugin_release_found = True
assert plugin_release_found
expected_msgs = [
"Skipping '%s/galaxy.yml' for collection build" % to_text(input_dir),
"Skipping '%s' for collection build" % to_text(release_file)
]
assert mock_display.call_count == 2
assert mock_display.mock_calls[0][1][0] in expected_msgs
assert mock_display.mock_calls[1][1][0] in expected_msgs
def test_build_ignore_patterns(collection_input, monkeypatch):
input_dir = collection_input[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection',
['*.md', 'plugins/action', 'playbooks/*.j2'])
assert actual['format'] == 1
expected_missing = [
'README.md',
'docs/My Collection.md',
'plugins/action',
'playbooks/templates/test.conf.j2',
'playbooks/templates/subfolder/test.conf.j2',
]
# Files or dirs that are close to a match but are not, make sure they are present
expected_present = [
'docs',
'roles/common/templates/test.conf.j2',
'roles/common/templates/subfolder/test.conf.j2',
]
actual_files = [e['name'] for e in actual['files']]
for m in expected_missing:
assert m not in actual_files
for p in expected_present:
assert p in actual_files
expected_msgs = [
"Skipping '%s/galaxy.yml' for collection build" % to_text(input_dir),
"Skipping '%s/README.md' for collection build" % to_text(input_dir),
"Skipping '%s/docs/My Collection.md' for collection build" % to_text(input_dir),
"Skipping '%s/plugins/action' for collection build" % to_text(input_dir),
"Skipping '%s/playbooks/templates/test.conf.j2' for collection build" % to_text(input_dir),
"Skipping '%s/playbooks/templates/subfolder/test.conf.j2' for collection build" % to_text(input_dir),
]
assert mock_display.call_count == len(expected_msgs)
assert mock_display.mock_calls[0][1][0] in expected_msgs
assert mock_display.mock_calls[1][1][0] in expected_msgs
assert mock_display.mock_calls[2][1][0] in expected_msgs
assert mock_display.mock_calls[3][1][0] in expected_msgs
assert mock_display.mock_calls[4][1][0] in expected_msgs
assert mock_display.mock_calls[5][1][0] in expected_msgs
def test_build_ignore_symlink_target_outside_collection(collection_input, monkeypatch):
input_dir, outside_dir = collection_input
mock_display = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_display)
link_path = os.path.join(input_dir, 'plugins', 'connection')
os.symlink(outside_dir, link_path)
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection', [])
for manifest_entry in actual['files']:
assert manifest_entry['name'] != 'plugins/connection'
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == "Skipping '%s' as it is a symbolic link to a directory outside " \
"the collection" % to_text(link_path)
def test_build_copy_symlink_target_inside_collection(collection_input):
input_dir = collection_input[0]
os.makedirs(os.path.join(input_dir, 'playbooks', 'roles'))
roles_link = os.path.join(input_dir, 'playbooks', 'roles', 'linked')
roles_target = os.path.join(input_dir, 'roles', 'linked')
roles_target_tasks = os.path.join(roles_target, 'tasks')
os.makedirs(roles_target_tasks)
with open(os.path.join(roles_target_tasks, 'main.yml'), 'w+') as tasks_main:
        tasks_main.write("---\n- hosts: localhost\n  tasks:\n  - ping:")
tasks_main.flush()
os.symlink(roles_target, roles_link)
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection', [])
linked_entries = [e for e in actual['files'] if e['name'].startswith('playbooks/roles/linked')]
assert len(linked_entries) == 1
assert linked_entries[0]['name'] == 'playbooks/roles/linked'
assert linked_entries[0]['ftype'] == 'dir'
def test_build_with_symlink_inside_collection(collection_input):
input_dir, output_dir = collection_input
os.makedirs(os.path.join(input_dir, 'playbooks', 'roles'))
roles_link = os.path.join(input_dir, 'playbooks', 'roles', 'linked')
file_link = os.path.join(input_dir, 'docs', 'README.md')
roles_target = os.path.join(input_dir, 'roles', 'linked')
roles_target_tasks = os.path.join(roles_target, 'tasks')
os.makedirs(roles_target_tasks)
with open(os.path.join(roles_target_tasks, 'main.yml'), 'w+') as tasks_main:
        tasks_main.write("---\n- hosts: localhost\n  tasks:\n  - ping:")
tasks_main.flush()
os.symlink(roles_target, roles_link)
os.symlink(os.path.join(input_dir, 'README.md'), file_link)
collection.build_collection(to_text(input_dir, errors='surrogate_or_strict'), to_text(output_dir, errors='surrogate_or_strict'), False)
output_artifact = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
assert tarfile.is_tarfile(output_artifact)
with tarfile.open(output_artifact, mode='r') as actual:
members = actual.getmembers()
linked_folder = next(m for m in members if m.path == 'playbooks/roles/linked')
assert linked_folder.type == tarfile.SYMTYPE
assert linked_folder.linkname == '../../roles/linked'
linked_file = next(m for m in members if m.path == 'docs/README.md')
assert linked_file.type == tarfile.SYMTYPE
assert linked_file.linkname == '../README.md'
linked_file_obj = actual.extractfile(linked_file.name)
actual_file = secure_hash_s(linked_file_obj.read())
linked_file_obj.close()
assert actual_file == '63444bfc766154e1bc7557ef6280de20d03fcd81'
def test_publish_no_wait(galaxy_server, collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
artifact_path, mock_open = collection_artifact
fake_import_uri = 'https://galaxy.server.com/api/v2/import/1234'
mock_publish = MagicMock()
mock_publish.return_value = fake_import_uri
monkeypatch.setattr(galaxy_server, 'publish_collection', mock_publish)
collection.publish_collection(artifact_path, galaxy_server, False, 0)
assert mock_publish.call_count == 1
assert mock_publish.mock_calls[0][1][0] == artifact_path
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == \
"Collection has been pushed to the Galaxy server %s %s, not waiting until import has completed due to " \
"--no-wait being set. Import task results can be found at %s" % (galaxy_server.name, galaxy_server.api_server,
fake_import_uri)
def test_publish_with_wait(galaxy_server, collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
artifact_path, mock_open = collection_artifact
fake_import_uri = 'https://galaxy.server.com/api/v2/import/1234'
mock_publish = MagicMock()
mock_publish.return_value = fake_import_uri
monkeypatch.setattr(galaxy_server, 'publish_collection', mock_publish)
mock_wait = MagicMock()
monkeypatch.setattr(galaxy_server, 'wait_import_task', mock_wait)
collection.publish_collection(artifact_path, galaxy_server, True, 0)
assert mock_publish.call_count == 1
assert mock_publish.mock_calls[0][1][0] == artifact_path
assert mock_wait.call_count == 1
assert mock_wait.mock_calls[0][1][0] == '1234'
assert mock_display.mock_calls[0][1][0] == "Collection has been published to the Galaxy server test_server %s" \
% galaxy_server.api_server
def test_find_existing_collections(tmp_path_factory, monkeypatch):
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
collection1 = os.path.join(test_dir, 'namespace1', 'collection1')
collection2 = os.path.join(test_dir, 'namespace2', 'collection2')
fake_collection1 = os.path.join(test_dir, 'namespace3', 'collection3')
fake_collection2 = os.path.join(test_dir, 'namespace4')
os.makedirs(collection1)
os.makedirs(collection2)
os.makedirs(os.path.split(fake_collection1)[0])
open(fake_collection1, 'wb+').close()
open(fake_collection2, 'wb+').close()
collection1_manifest = json.dumps({
'collection_info': {
'namespace': 'namespace1',
'name': 'collection1',
'version': '1.2.3',
'authors': ['Jordan Borean'],
'readme': 'README.md',
'dependencies': {},
},
'format': 1,
})
with open(os.path.join(collection1, 'MANIFEST.json'), 'wb') as manifest_obj:
manifest_obj.write(to_bytes(collection1_manifest))
mock_warning = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_warning)
actual = list(collection.find_existing_collections(test_dir, artifacts_manager=concrete_artifact_cm))
assert len(actual) == 2
for actual_collection in actual:
if '%s.%s' % (actual_collection.namespace, actual_collection.name) == 'namespace1.collection1':
assert actual_collection.namespace == 'namespace1'
assert actual_collection.name == 'collection1'
assert actual_collection.ver == '1.2.3'
assert to_text(actual_collection.src) == collection1
else:
assert actual_collection.namespace == 'namespace2'
assert actual_collection.name == 'collection2'
assert actual_collection.ver == '*'
assert to_text(actual_collection.src) == collection2
assert mock_warning.call_count == 1
assert mock_warning.mock_calls[0][1][0] == "Collection at '%s' does not have a MANIFEST.json file, nor has it galaxy.yml: " \
"cannot detect version." % to_text(collection2)
def test_download_file(tmp_path_factory, monkeypatch):
temp_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
data = b"\x00\x01\x02\x03"
sha256_hash = sha256()
sha256_hash.update(data)
mock_open = MagicMock()
mock_open.return_value = BytesIO(data)
monkeypatch.setattr(collection.concrete_artifact_manager, 'open_url', mock_open)
expected = temp_dir
actual = collection._download_file('http://google.com/file', temp_dir, sha256_hash.hexdigest(), True)
assert actual.startswith(expected)
assert os.path.isfile(actual)
with open(actual, 'rb') as file_obj:
assert file_obj.read() == data
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == 'http://google.com/file'
def test_download_file_hash_mismatch(tmp_path_factory, monkeypatch):
temp_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
data = b"\x00\x01\x02\x03"
mock_open = MagicMock()
mock_open.return_value = BytesIO(data)
monkeypatch.setattr(collection.concrete_artifact_manager, 'open_url', mock_open)
expected = "Mismatch artifact hash with downloaded file"
with pytest.raises(AnsibleError, match=expected):
collection._download_file('http://google.com/file', temp_dir, 'bad', True)
def test_extract_tar_file_invalid_hash(tmp_tarfile):
temp_dir, tfile, filename, dummy = tmp_tarfile
expected = "Checksum mismatch for '%s' inside collection at '%s'" % (to_native(filename), to_native(tfile.name))
with pytest.raises(AnsibleError, match=expected):
collection._extract_tar_file(tfile, filename, temp_dir, temp_dir, "fakehash")
def test_extract_tar_file_missing_member(tmp_tarfile):
temp_dir, tfile, dummy, dummy = tmp_tarfile
expected = "Collection tar at '%s' does not contain the expected file 'missing'." % to_native(tfile.name)
with pytest.raises(AnsibleError, match=expected):
collection._extract_tar_file(tfile, 'missing', temp_dir, temp_dir)
def test_extract_tar_file_missing_parent_dir(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
output_dir = os.path.join(temp_dir, b'output')
output_file = os.path.join(output_dir, to_bytes(filename))
collection._extract_tar_file(tfile, filename, output_dir, temp_dir, checksum)
    assert os.path.isfile(output_file)
def test_extract_tar_file_outside_dir(tmp_path_factory):
filename = u'ÅÑŚÌβŁÈ'
temp_dir = to_bytes(tmp_path_factory.mktemp('test-%s Collections' % to_native(filename)))
tar_file = os.path.join(temp_dir, to_bytes('%s.tar.gz' % filename))
data = os.urandom(8)
tar_filename = '../%s.sh' % filename
with tarfile.open(tar_file, 'w:gz') as tfile:
b_io = BytesIO(data)
tar_info = tarfile.TarInfo(tar_filename)
tar_info.size = len(data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
expected = re.escape("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(tar_filename))
with tarfile.open(tar_file, 'r') as tfile:
with pytest.raises(AnsibleError, match=expected):
collection._extract_tar_file(tfile, tar_filename, os.path.join(temp_dir, to_bytes(filename)), temp_dir)
def test_require_one_of_collections_requirements_with_both():
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'verify', 'namespace.collection', '-r', 'requirements.yml'])
with pytest.raises(AnsibleError) as req_err:
cli._require_one_of_collections_requirements(('namespace.collection',), 'requirements.yml')
with pytest.raises(AnsibleError) as cli_err:
cli.run()
assert req_err.value.message == cli_err.value.message == 'The positional collection_name arg and --requirements-file are mutually exclusive.'
def test_require_one_of_collections_requirements_with_neither():
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'verify'])
with pytest.raises(AnsibleError) as req_err:
cli._require_one_of_collections_requirements((), '')
with pytest.raises(AnsibleError) as cli_err:
cli.run()
assert req_err.value.message == cli_err.value.message == 'You must specify a collection name or a requirements file.'
def test_require_one_of_collections_requirements_with_collections():
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'verify', 'namespace1.collection1', 'namespace2.collection1:1.0.0'])
collections = ('namespace1.collection1', 'namespace2.collection1:1.0.0',)
requirements = cli._require_one_of_collections_requirements(collections, '')['collections']
req_tuples = [('%s.%s' % (req.namespace, req.name), req.ver, req.src, req.type,) for req in requirements]
assert req_tuples == [('namespace1.collection1', '*', None, 'galaxy'), ('namespace2.collection1', '1.0.0', None, 'galaxy')]
@patch('ansible.cli.galaxy.GalaxyCLI._parse_requirements_file')
def test_require_one_of_collections_requirements_with_requirements(mock_parse_requirements_file, galaxy_server):
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'verify', '-r', 'requirements.yml', 'namespace.collection'])
mock_parse_requirements_file.return_value = {'collections': [('namespace.collection', '1.0.5', galaxy_server)]}
requirements = cli._require_one_of_collections_requirements((), 'requirements.yml')['collections']
assert mock_parse_requirements_file.call_count == 1
assert requirements == [('namespace.collection', '1.0.5', galaxy_server)]
@patch('ansible.cli.galaxy.GalaxyCLI.execute_verify', spec=True)
def test_call_GalaxyCLI(execute_verify):
galaxy_args = ['ansible-galaxy', 'collection', 'verify', 'namespace.collection']
GalaxyCLI(args=galaxy_args).run()
assert execute_verify.call_count == 1
@patch('ansible.cli.galaxy.GalaxyCLI.execute_verify')
def test_call_GalaxyCLI_with_implicit_role(execute_verify):
galaxy_args = ['ansible-galaxy', 'verify', 'namespace.implicit_role']
with pytest.raises(SystemExit):
GalaxyCLI(args=galaxy_args).run()
assert not execute_verify.called
@patch('ansible.cli.galaxy.GalaxyCLI.execute_verify')
def test_call_GalaxyCLI_with_role(execute_verify):
galaxy_args = ['ansible-galaxy', 'role', 'verify', 'namespace.role']
with pytest.raises(SystemExit):
GalaxyCLI(args=galaxy_args).run()
assert not execute_verify.called
@patch('ansible.cli.galaxy.verify_collections', spec=True)
def test_execute_verify_with_defaults(mock_verify_collections):
galaxy_args = ['ansible-galaxy', 'collection', 'verify', 'namespace.collection:1.0.4']
GalaxyCLI(args=galaxy_args).run()
assert mock_verify_collections.call_count == 1
print("Call args {0}".format(mock_verify_collections.call_args[0]))
requirements, search_paths, galaxy_apis, ignore_errors = mock_verify_collections.call_args[0]
assert [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type) for r in requirements] == [('namespace.collection', '1.0.4', None, 'galaxy')]
for install_path in search_paths:
assert install_path.endswith('ansible_collections')
assert galaxy_apis[0].api_server == 'https://galaxy.ansible.com'
assert ignore_errors is False
@patch('ansible.cli.galaxy.verify_collections', spec=True)
def test_execute_verify(mock_verify_collections):
GalaxyCLI(args=[
'ansible-galaxy', 'collection', 'verify', 'namespace.collection:1.0.4', '--ignore-certs',
'-p', '~/.ansible', '--ignore-errors', '--server', 'http://galaxy-dev.com',
]).run()
assert mock_verify_collections.call_count == 1
requirements, search_paths, galaxy_apis, ignore_errors = mock_verify_collections.call_args[0]
assert [('%s.%s' % (r.namespace, r.name), r.ver, r.src, r.type) for r in requirements] == [('namespace.collection', '1.0.4', None, 'galaxy')]
for install_path in search_paths:
assert install_path.endswith('ansible_collections')
assert galaxy_apis[0].api_server == 'http://galaxy-dev.com'
assert ignore_errors is True
def test_verify_file_hash_deleted_file(manifest_info):
data = to_bytes(json.dumps(manifest_info))
digest = sha256(data).hexdigest()
namespace = manifest_info['collection_info']['namespace']
name = manifest_info['collection_info']['name']
version = manifest_info['collection_info']['version']
server = 'http://galaxy.ansible.com'
error_queue = []
with patch.object(builtins, 'open', mock_open(read_data=data)) as m:
with patch.object(collection.os.path, 'isfile', MagicMock(return_value=False)) as mock_isfile:
collection._verify_file_hash(b'path/', 'file', digest, error_queue)
            mock_isfile.assert_called_once()
assert len(error_queue) == 1
assert error_queue[0].installed is None
assert error_queue[0].expected == digest
def test_verify_file_hash_matching_hash(manifest_info):
data = to_bytes(json.dumps(manifest_info))
digest = sha256(data).hexdigest()
namespace = manifest_info['collection_info']['namespace']
name = manifest_info['collection_info']['name']
version = manifest_info['collection_info']['version']
server = 'http://galaxy.ansible.com'
error_queue = []
with patch.object(builtins, 'open', mock_open(read_data=data)) as m:
with patch.object(collection.os.path, 'isfile', MagicMock(return_value=True)) as mock_isfile:
collection._verify_file_hash(b'path/', 'file', digest, error_queue)
            mock_isfile.assert_called_once()
assert error_queue == []
def test_verify_file_hash_mismatching_hash(manifest_info):
data = to_bytes(json.dumps(manifest_info))
digest = sha256(data).hexdigest()
different_digest = 'not_{0}'.format(digest)
namespace = manifest_info['collection_info']['namespace']
name = manifest_info['collection_info']['name']
version = manifest_info['collection_info']['version']
server = 'http://galaxy.ansible.com'
error_queue = []
with patch.object(builtins, 'open', mock_open(read_data=data)) as m:
with patch.object(collection.os.path, 'isfile', MagicMock(return_value=True)) as mock_isfile:
collection._verify_file_hash(b'path/', 'file', different_digest, error_queue)
            mock_isfile.assert_called_once()
assert len(error_queue) == 1
assert error_queue[0].installed == digest
assert error_queue[0].expected == different_digest
def test_consume_file(manifest):
manifest_file, checksum = manifest
assert checksum == collection._consume_file(manifest_file)
def test_consume_file_and_write_contents(manifest, manifest_info):
manifest_file, checksum = manifest
write_to = BytesIO()
actual_hash = collection._consume_file(manifest_file, write_to)
write_to.seek(0)
assert to_bytes(json.dumps(manifest_info)) == write_to.read()
assert actual_hash == checksum
def test_get_tar_file_member(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
with collection._get_tar_file_member(tfile, filename) as (tar_file_member, tar_file_obj):
assert isinstance(tar_file_member, tarfile.TarInfo)
assert isinstance(tar_file_obj, tarfile.ExFileObject)
def test_get_nonexistent_tar_file_member(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
file_does_not_exist = filename + 'nonexistent'
with pytest.raises(AnsibleError) as err:
collection._get_tar_file_member(tfile, file_does_not_exist)
assert to_text(err.value.message) == "Collection tar at '%s' does not contain the expected file '%s'." % (to_text(tfile.name), file_does_not_exist)
def test_get_tar_file_hash(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
assert checksum == collection._get_tar_file_hash(tfile.name, filename)
def test_get_json_from_tar_file(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
assert 'MANIFEST.json' in tfile.getnames()
data = collection._get_json_from_tar_file(tfile.name, 'MANIFEST.json')
assert isinstance(data, dict)
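# A note on the pattern used throughout this module: most tests swap a real
# collaborator for a MagicMock via pytest's monkeypatch fixture, run the code
# under test, then assert on how the mock was called. A minimal sketch of that
# seam-patching pattern (kept in comments; 'some_module' and 'publish' are
# hypothetical names, not part of this test suite):
#
#     from unittest.mock import MagicMock
#
#     def test_pattern_sketch(monkeypatch):
#         mock_publish = MagicMock(return_value='https://example.invalid/import/1')
#         monkeypatch.setattr(some_module, 'publish', mock_publish)
#         some_module.publish('artifact.tar.gz')
#         mock_publish.assert_called_once_with('artifact.tar.gz')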
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,395 |
Remove deprecated DEFAULT_LIBVIRT_LXC_NOSECLABEL.env.0
|
### Summary
The config option `DEFAULT_LIBVIRT_LXC_NOSECLABEL.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77395
|
https://github.com/ansible/ansible/pull/77426
|
e918cfa588ff7b6c98c90fef2798250a26c94e01
|
ff8a854e571e8f6a22e34d2f285fbc4e86d5ec9c
| 2022-03-29T21:34:06Z |
python
| 2022-03-31T19:11:20Z |
changelogs/fragments/77395-remove-libvirt_lxc_noseclabel.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,395 |
Remove deprecated DEFAULT_LIBVIRT_LXC_NOSECLABEL.env.0
|
### Summary
The config option `DEFAULT_LIBVIRT_LXC_NOSECLABEL.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77395
|
https://github.com/ansible/ansible/pull/77426
|
e918cfa588ff7b6c98c90fef2798250a26c94e01
|
ff8a854e571e8f6a22e34d2f285fbc4e86d5ec9c
| 2022-03-29T21:34:06Z |
python
| 2022-03-31T19:11:20Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
  description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
  description: Accept list of cowsay templates that are 'safe' to use; set to an empty list to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
  description: This option forces color mode even when running without a TTY or when the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
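# Illustrative only (not part of the configuration schema above): pipelining is
# typically switched on in ansible.cfg or the environment, e.g.
#
#   [defaults]
#   pipelining = True
#
# or ANSIBLE_PIPELINING=True. As the description notes, combine this with
# become only after disabling 'requiretty' in /etc/sudoers on managed hosts.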
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description:
    - This setting controls if become is skipped when remote user and become user are the same, i.e. root sudo'ing to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
- 'The password file to use for the become plugin. --become-password-file.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
  description: Chooses which cache plugin to use; the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
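# Illustrative only: the three fact-caching options above are usually set
# together in ansible.cfg, e.g. persisting facts as JSON files for two hours:
#
#   [defaults]
#   fact_caching = jsonfile
#   fact_caching_connection = /tmp/ansible_fact_cache
#   fact_caching_timeout = 7200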
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
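# Illustrative layout for the nesting rule described above, assuming
# ~/.ansible/collections is on the path and the collection is my.collection:
#
#   ~/.ansible/collections/
#   └── ansible_collections/
#       └── my/
#           └── collection/
#               ├── MANIFEST.json
#               └── plugins/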
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (with the collection metadata key `requires_ansible`).
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
description: Defines the color to use when emitting verbose messages. i.e those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
description: 'The password file to use for the connection plugin. --connection-password-file.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have its coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
    - By default Ansible will issue a warning when it receives one from a task action (module or action plugin).
    - These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backward compatibility,
      however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
      through the templating engine later.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ~
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callbacks_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
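# Illustrative only: enabling an opt-in callback via ansible.cfg, assuming the
# plugin is installed (profile_tasks, for example, ships in the ansible.posix
# collection):
#
#   [defaults]
#   callbacks_enabled = ansible.posix.profile_tasks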
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fallback to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action being created by the implicit task is currently ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules,
by default this will be ``ansible.builtin.setup`` for POSIX systems but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
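# Illustrative only: selecting the 'smart' policy in ansible.cfg so hosts that
# appear in several plays of one run are only fact-scanned once:
#
#   [defaults]
#   gathering = smart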
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
      to create merged versions of the individual variables. In our experience this is rarely needed and is a sign that too much
      complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
    - The Ansible project recommends you **avoid ``merge`` for new projects.**
    - It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
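# Illustrative worked example of the two policies above. Given
#   group_vars/all:   cfg: {a: 1, b: 2}
#   host_vars/web1:   cfg: {b: 3}
# 'replace' (the default) yields cfg == {b: 3}, while 'merge' yields
# cfg == {a: 1, b: 3}. The recommended alternative keeps the default and
# merges explicitly, e.g. "{{ cfg | combine({'b': 3}) }}".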
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
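  # Hedged example: with jinja2_native enabled, "{{ 2 + 2 }}" templates to the
  # integer 4 rather than the string '4', so numeric comparisons keep working.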
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: LIBVIRT_LXC_NOSECLABEL
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_LIBVIRT_LXC_NOSECLABEL`` environment variable
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer
      style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- Option for connections using a certificate or key file to authenticate, rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
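  # Hedged example ansible.cfg snippet (key path is illustrative):
  #   [defaults]
  #   private_key_file = ~/.ssh/ansible_ed25519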
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger, this previously was done as a strategy plugin.
    - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
      a task is failed on unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
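  # Hedged example ansible.cfg snippet; the vault-id labels and sources are
  # illustrative:
  #   [defaults]
  #   vault_identity_list = dev@~/.vault_pass_dev, prod@prompt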
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error2
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
    - The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
    - Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
      outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
    - A list of GPG status codes to ignore during GPG signature verification.
See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
    - This should be a positive integer or ``all`` to indicate all signatures must successfully validate the collection.
    - Prepend ``+`` to the value to fail if no valid signatures are found for the collection.
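  # Hedged examples of accepted values, per the description above: '1' (at
  # least one signature must verify), '+2' (at least two must verify, and fail
  # if none are found), 'all' (every signature must verify), '+all' (the same,
  # and at least one signature must be present).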
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns, it allows you to force a fatal error, a warning or just ignore it
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
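  # Hedged example: pinning the interpreter for a single host in INI inventory
  # (host name and path are illustrative):
  #   db01 ansible_python_interpreter=/usr/local/bin/python3.9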
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.10
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- /usr/bin/python3
- /usr/libexec/platform-python
- python2.7
- python2.6
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
  description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or one optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
JINJA2_NATIVE_WARNING:
name: Running older than required Jinja version for jinja2_native warning
default: True
description: Toggle to control showing warnings related to running a Jinja version
older than required for jinja2_native
env:
- name: ANSIBLE_JINJA2_NATIVE_WARNING
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.17"
ini:
- {key: jinja2_native_warning, section: defaults}
type: boolean
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
  description: Previously Ansible would only clear some of the plugin loading caches when loading new roles, this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows the user to return to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for response from remote device before timing out persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
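  # Hedged example: because 'to_json' is listed here, "{{ payload | to_json }}"
  # stays a JSON-encoded string instead of being converted back into a
  # dict/list by the templar.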
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
  description: default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
  description: default list of tags to skip in your plays; has precedence over Run Tags
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
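  # Hedged example: capping every task at five minutes for one run (the value
  # is illustrative):
  #   ANSIBLE_TASK_TIMEOUT=300 ansible-playbook site.yml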
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows to change the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
  description: This variable is used to enable a bastion/jump host with the netconf connection. If set to True the bastion/jump
    host ssh settings should be present in the ~/.ssh/config file; alternatively it can be set
    to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
    - A toggle to control validation of a collection's 'metadata' entry for a module_defaults action group.
      Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,399 |
Remove deprecated options.display_skipped_hosts.env.0
|
### Summary
The config option `options.display_skipped_hosts.env.0` should be removed from `lib/ansible/plugins/callback/default.py`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/plugins/callback/default.py`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77399
|
https://github.com/ansible/ansible/pull/77427
|
ff8a854e571e8f6a22e34d2f285fbc4e86d5ec9c
|
f2387537b6ab8fca0d82a7aca41541dcdbc97894
| 2022-03-29T21:34:12Z |
python
| 2022-03-31T19:11:31Z |
changelogs/fragments/77396-remove-display_skipped_hosts.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,399 |
Remove deprecated options.display_skipped_hosts.env.0
|
### Summary
The config option `options.display_skipped_hosts.env.0` should be removed from `lib/ansible/plugins/callback/default.py`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/plugins/callback/default.py`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77399
|
https://github.com/ansible/ansible/pull/77427
|
ff8a854e571e8f6a22e34d2f285fbc4e86d5ec9c
|
f2387537b6ab8fca0d82a7aca41541dcdbc97894
| 2022-03-29T21:34:12Z |
python
| 2022-03-31T19:11:31Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
  description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Accept list of cowsay templates that are 'safe' to use, set to empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
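  # Hedged example ansible.cfg snippet enabling pipelining globally:
  #   [connection]
  #   pipelining = True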
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description:
    - This setting controls if become is skipped when remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
- 'The password file to use for the become plugin. --become-password-file.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use; the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
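# Illustrative layout only (the namespace 'my' and collection name 'collection'
# are assumptions): a collection under one of these paths must be nested as
#   ~/.ansible/collections/
#       ansible_collections/
#           my/
#               collection/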
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- Action to take when a collection is loaded that does not support the running Ansible version (as declared via the collection metadata key `requires_ansible`).
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
description: Defines the color to use when emitting verbose messages. i.e those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
description: 'The password file to use for the connection plugin. --connection-password-file.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have their coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
- By default Ansible will issue a warning when it receives one from a task action (module or action plugin).
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backward compatibility;
however, users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
through the templating engine later.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ~
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callbacks_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
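# Illustrative only: enabling callbacks via ansible.cfg (the plugin names are
# examples; many such callbacks now ship in collections):
#   [defaults]
#   callbacks_enabled = timer, profile_tasks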
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fallback to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action created by the implicit task is currently the ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules;
by default this will be ``ansible.builtin.setup`` for POSIX systems, but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
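# Illustrative only: selecting the 'smart' gathering policy in ansible.cfg:
#   [defaults]
#   gathering = smart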
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in the same file.
- The Ansible project recommends you **avoid ``merge`` for new projects.**
- It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
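# Illustrative only: the recommended alternative to 'merge' is an explicit,
# one-off merge with the ``combine`` filter (the variable names are assumptions):
#   merged_conf: "{{ default_conf | combine(env_conf, recursive=True) }}"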
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible uses to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty, logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
- Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer
style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- For connections that use a certificate or key file to authenticate rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger, this previously was done as a strategy plugin.
- Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
a task fails or a host is unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
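# Illustrative only (the labels and paths are assumptions):
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev, prod@~/.vault_pass_prod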
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values.
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be silenced by adjusting this setting to 'ignore'.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error2
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
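# Illustrative only: restricting fact gathering to one explicit module:
#   export ANSIBLE_FACTS_MODULES='ansible.builtin.setup'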
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
- The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
- A list of GPG status codes to ignore during GPG signature verification.
See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
- This should be a positive integer or ``all`` to indicate all signatures must successfully validate the collection.
- Prepend + to the value to fail if no valid signatures are found for the collection.
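# Illustrative values only:
#   required_valid_signature_count = 2     at least two signatures must verify
#   required_valid_signature_count = all   every signature must verify
#   required_valid_signature_count = +1    fail if no valid signatures are found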
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, issue a warning, or just ignore it.
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
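# Illustrative only: pinning the interpreter for a single inventory host (the
# host name and path are assumptions):
#   myhost ansible_python_interpreter=/usr/bin/python3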
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.10
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- /usr/bin/python3
- /usr/libexec/platform-python
- python2.7
- python2.6
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
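# Illustrative only: with 'always' (or 'silently'), an inventory group named
# 'web-servers.prod' (an assumed example) would be exposed as 'web_servers_prod'.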
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins; it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse; otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
JINJA2_NATIVE_WARNING:
name: Running older than required Jinja version for jinja2_native warning
default: True
description: Toggle to control showing warnings related to running a Jinja version
older than required for jinja2_native
env:
- name: ANSIBLE_JINJA2_NATIVE_WARNING
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.17"
ini:
- {key: jinja2_native_warning, section: defaults}
type: boolean
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable; this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles, which led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows returning to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for a response from the remote device before timing out the persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
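# Sketch (comment only) of the file this option points at, following the
# plugin_filtering_config format; the rejected module name below is an
# arbitrary example:
#   ---
#   filter_version: '1.0'
#   module_blacklist:
#     - easy_install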
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
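# Example (comment only): capping every task at 30 seconds via the ini mapping
# above; a task exceeding the limit fails instead of hanging:
#   [defaults]
#   task_timeout = 30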
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
description: This variable is used to enable a bastion/jump host with a netconf connection. If set to True, the bastion/jump
host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set
to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
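# Worked illustration (comment only) of the conversion this option reacts to:
#   version: 1.30    # YAML parses this as the float 1.3; passing it to a string
#                    # parameter converts it to '1.3' and triggers warn/error/ignore
#   version: '1.30'  # fully quoted, stays the string '1.30', no message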
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
- A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group.
Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,399 |
Remove deprecated options.display_skipped_hosts.env.0
|
### Summary
The config option `options.display_skipped_hosts.env.0` should be removed from `lib/ansible/plugins/callback/default.py`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/plugins/callback/default.py`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77399
|
https://github.com/ansible/ansible/pull/77427
|
ff8a854e571e8f6a22e34d2f285fbc4e86d5ec9c
|
f2387537b6ab8fca0d82a7aca41541dcdbc97894
| 2022-03-29T21:34:12Z |
python
| 2022-03-31T19:11:31Z |
lib/ansible/plugins/doc_fragments/default_callback.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
DOCUMENTATION = r'''
options:
display_skipped_hosts:
name: Show skipped hosts
description: "Toggle to control displaying skipped task/host results in a task"
type: bool
default: yes
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without C(ANSIBLE_) prefix are deprecated
version: "2.12"
alternatives: the C(ANSIBLE_DISPLAY_SKIPPED_HOSTS) environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- key: display_skipped_hosts
section: defaults
display_ok_hosts:
name: Show 'ok' hosts
description: "Toggle to control displaying 'ok' task/host results in a task"
type: bool
default: yes
env:
- name: ANSIBLE_DISPLAY_OK_HOSTS
ini:
- key: display_ok_hosts
section: defaults
version_added: '2.7'
display_failed_stderr:
name: Use STDERR for failed and unreachable tasks
description: "Toggle to control whether failed and unreachable tasks are displayed to STDERR (vs. STDOUT)"
type: bool
default: no
env:
- name: ANSIBLE_DISPLAY_FAILED_STDERR
ini:
- key: display_failed_stderr
section: defaults
version_added: '2.7'
show_custom_stats:
name: Show custom stats
description: 'This adds the custom stats set via the set_stats plugin to the play recap'
type: bool
default: no
env:
- name: ANSIBLE_SHOW_CUSTOM_STATS
ini:
- key: show_custom_stats
section: defaults
show_per_host_start:
name: Show per host task start
description: 'This adds output that shows when a task is started to execute for each host'
type: bool
default: no
env:
- name: ANSIBLE_SHOW_PER_HOST_START
ini:
- key: show_per_host_start
section: defaults
version_added: '2.9'
check_mode_markers:
name: Show markers when running in check mode
description:
- Toggle to control displaying markers when running in check mode.
- "The markers are C(DRY RUN) at the beginning and ending of playbook execution (when calling C(ansible-playbook --check))
and C(CHECK MODE) as a suffix at every play and task that is run in check mode."
type: bool
default: no
version_added: '2.9'
env:
- name: ANSIBLE_CHECK_MODE_MARKERS
ini:
- key: check_mode_markers
section: defaults
show_task_path_on_failure:
name: Show file path on failed tasks
description:
When a task fails, display the path to the file containing the failed task and the line number.
This information is displayed automatically for every task when running with C(-vv) or greater verbosity.
type: bool
default: no
env:
- name: ANSIBLE_SHOW_TASK_PATH_ON_FAILURE
ini:
- key: show_task_path_on_failure
section: defaults
version_added: '2.11'
'''
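# Usage sketch (illustrative comment, not part of the upstream file): a stdout
# callback opts into these shared options by extending this fragment in its own
# DOCUMENTATION string:
#     extends_documentation_fragment:
#       - default_callback
# and reads a value at runtime with self.get_option('display_skipped_hosts')
# once set_options() has populated the plugin's options.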
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,396 |
Remove deprecated DISPLAY_SKIPPED_HOSTS.env.0
|
### Summary
The config option `DISPLAY_SKIPPED_HOSTS.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77396
|
https://github.com/ansible/ansible/pull/77427
|
ff8a854e571e8f6a22e34d2f285fbc4e86d5ec9c
|
f2387537b6ab8fca0d82a7aca41541dcdbc97894
| 2022-03-29T21:34:08Z |
python
| 2022-03-31T19:11:31Z |
changelogs/fragments/77396-remove-display_skipped_hosts.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,396 |
Remove deprecated DISPLAY_SKIPPED_HOSTS.env.0
|
### Summary
The config option `DISPLAY_SKIPPED_HOSTS.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77396
|
https://github.com/ansible/ansible/pull/77427
|
ff8a854e571e8f6a22e34d2f285fbc4e86d5ec9c
|
f2387537b6ab8fca0d82a7aca41541dcdbc97894
| 2022-03-29T21:34:08Z |
python
| 2022-03-31T19:11:31Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Accept list of cowsay templates that are 'safe' to use; set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option; each connection plugin can override it either by having more specific options or by not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
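# Example (comment only): enabling pipelining via either ini location listed above --
#   [connection]
#   pipelining = True
# If privilege escalation is also in use, 'Defaults !requiretty' may be needed
# in /etc/sudoers on the managed hosts, as noted in the description.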
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description:
- This setting controls if become is skipped when the remote user and the become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
- 'The password file to use for the become plugin. --become-password-file.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use; the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- What to do when a loaded collection does not support the running Ansible version (as declared by the collection metadata key `requires_ansible`).
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
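# Notation note (comment only): '&color' above declares a YAML anchor, and each
# COLOR_* entry below imports its fields with the merge key, e.g.
#   COLOR_CHANGED:
#     <<: *color   # pulls in the shared 'choices' list (and placeholder name)
# so the long color list is written once and reused; '&basic_error' earlier
# works the same way.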
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
description: 'The password file to use for the connection plugin. --connection-password-file.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have its coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
- By default Ansible will issue a warning when one is received from a task action (module or action plugin)
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backward compatibility,
however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
through the templating engine late
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ~
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callbacks_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fallback to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action being created by the implicit task is currently ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules,
by default this will be ``ansible.builtin.setup`` for POSIX systems but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in the same file.
- The Ansible project recommends you **avoid ``merge`` for new projects.**
- It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
New projects should **avoid 'merge'**.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
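# Worked illustration (comment only) of the two modes, with the same dictionary
# variable defined in two sources:
#   group_vars/all:   app: {port: 80}
#   host_vars/web1:   app: {tls: true}
# 'replace' leaves app == {tls: true} (the higher-precedence definition wins),
# 'merge' leaves app == {port: 80, tls: true} (recursive merge).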
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty, logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
- Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer
style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- For connections that authenticate with a certificate or key file rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger, this previously was done as a strategy plugin.
- Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
- a task is failed on unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference undefined variable names (often the result of a typo).
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
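# Illustrative only: vault ids use the 'label@source' form, so a hypothetical
# two-vault setup could look like the following in ansible.cfg (the labels
# and file paths are invented for the example).
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev, prod@~/.vault_pass_prod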
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values.
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be silenced by setting this option to 'ignore', or escalated to fatal errors with 'error'.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error2
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
- The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
- A list of GPG status codes to ignore during GPG signature verification.
See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
- This should be a positive integer or 'all' to indicate all signatures must successfully validate the collection.
- Prepend '+' to the value to fail if no valid signatures are found for the collection.
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns, it allows you to force a fatal error, a warning or just ignore it
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
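# Illustrative only: discovery can be made quiet globally and pinned per
# host via the 'ansible_python_interpreter' variable listed above; the
# interpreter path is an example, not a recommendation.
#   [defaults]
#   interpreter_python = auto_silent
#   # per-host override in an ini inventory:
#   # myhost ansible_python_interpreter=/usr/bin/python3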
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.10
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- /usr/bin/python3
- /usr/libexec/platform-python
- python2.7
- python2.6
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
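# Illustrative only: with 'always', a group named 'my-group' coming from an
# inventory source would be rewritten to 'my_group' with a warning;
# 'silently' performs the same rewrite without the warning.
#   [defaults]
#   force_valid_group_names = silently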
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
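# Illustrative only: restricting inventory parsing to two of the plugins
# from the default list above (order matters, as noted).
#   [inventory]
#   enable_plugins = yaml, ini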
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
JINJA2_NATIVE_WARNING:
name: Running older than required Jinja version for jinja2_native warning
default: True
description: Toggle to control showing warnings related to running a Jinja version
older than required for jinja2_native
env:
- name: ANSIBLE_JINJA2_NATIVE_WARNING
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.17"
ini:
- {key: jinja2_native_warning, section: defaults}
type: boolean
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows you to return to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for response from remote device before timing out persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
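# Illustrative only: cap any single task at 300 seconds (0, the default,
# disables the timeout); the value is an arbitrary example.
#   [defaults]
#   task_timeout = 300
#   # equivalent: export ANSIBLE_TASK_TIMEOUT=300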
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
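# Illustrative only: narrowing variable file lookups to '.yml' files only;
# note the option takes a list, mirroring the default above.
#   [defaults]
#   yaml_valid_extensions = .yml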
NETCONF_SSH_CONFIG:
description: This variable is used to enable a bastion/jump host with the netconf connection. If set to True, the bastion/jump
host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set
to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
- A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group.
Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,396 |
Remove deprecated DISPLAY_SKIPPED_HOSTS.env.0
|
### Summary
The config option `DISPLAY_SKIPPED_HOSTS.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77396
|
https://github.com/ansible/ansible/pull/77427
|
ff8a854e571e8f6a22e34d2f285fbc4e86d5ec9c
|
f2387537b6ab8fca0d82a7aca41541dcdbc97894
| 2022-03-29T21:34:08Z |
python
| 2022-03-31T19:11:31Z |
lib/ansible/plugins/doc_fragments/default_callback.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
DOCUMENTATION = r'''
options:
display_skipped_hosts:
name: Show skipped hosts
description: "Toggle to control displaying skipped task/host results in a task"
type: bool
default: yes
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without C(ANSIBLE_) prefix are deprecated
version: "2.12"
alternatives: the C(ANSIBLE_DISPLAY_SKIPPED_HOSTS) environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- key: display_skipped_hosts
section: defaults
display_ok_hosts:
name: Show 'ok' hosts
description: "Toggle to control displaying 'ok' task/host results in a task"
type: bool
default: yes
env:
- name: ANSIBLE_DISPLAY_OK_HOSTS
ini:
- key: display_ok_hosts
section: defaults
version_added: '2.7'
display_failed_stderr:
name: Use STDERR for failed and unreachable tasks
description: "Toggle to control whether failed and unreachable tasks are displayed to STDERR (vs. STDOUT)"
type: bool
default: no
env:
- name: ANSIBLE_DISPLAY_FAILED_STDERR
ini:
- key: display_failed_stderr
section: defaults
version_added: '2.7'
show_custom_stats:
name: Show custom stats
description: 'This adds the custom stats set via the set_stats plugin to the play recap'
type: bool
default: no
env:
- name: ANSIBLE_SHOW_CUSTOM_STATS
ini:
- key: show_custom_stats
section: defaults
show_per_host_start:
name: Show per host task start
description: 'This adds output that shows when a task is started to execute for each host'
type: bool
default: no
env:
- name: ANSIBLE_SHOW_PER_HOST_START
ini:
- key: show_per_host_start
section: defaults
version_added: '2.9'
check_mode_markers:
name: Show markers when running in check mode
description:
- Toggle to control displaying markers when running in check mode.
- "The markers are C(DRY RUN) at the beginning and ending of playbook execution (when calling C(ansible-playbook --check))
and C(CHECK MODE) as a suffix at every play and task that is run in check mode."
type: bool
default: no
version_added: '2.9'
env:
- name: ANSIBLE_CHECK_MODE_MARKERS
ini:
- key: check_mode_markers
section: defaults
show_task_path_on_failure:
name: Show file path on failed tasks
description:
When a task fails, display the path to the file containing the failed task and the line number.
This information is displayed automatically for every task when running with C(-vv) or greater verbosity.
type: bool
default: no
env:
- name: ANSIBLE_SHOW_TASK_PATH_ON_FAILURE
ini:
- key: show_task_path_on_failure
section: defaults
version_added: '2.11'
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,397 |
Remove deprecated NETWORK_GROUP_MODULES.env.0
|
### Summary
The config option `NETWORK_GROUP_MODULES.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77397
|
https://github.com/ansible/ansible/pull/77428
|
f2387537b6ab8fca0d82a7aca41541dcdbc97894
|
a421a38e1068924478535f4aa3b27fd566dc452e
| 2022-03-29T21:34:09Z |
python
| 2022-03-31T19:11:42Z |
changelogs/fragments/77397-remove-network_group_modules.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,397 |
Remove deprecated NETWORK_GROUP_MODULES.env.0
|
### Summary
The config option `NETWORK_GROUP_MODULES.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77397
|
https://github.com/ansible/ansible/pull/77428
|
f2387537b6ab8fca0d82a7aca41541dcdbc97894
|
a421a38e1068924478535f4aa3b27fd566dc452e
| 2022-03-29T21:34:09Z |
python
| 2022-03-31T19:11:42Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Accept list of cowsay templates that are 'safe' to use, set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
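# Illustrative only: enabling pipelining globally; as the description above
# notes, this may also require disabling 'requiretty' in /etc/sudoers on
# managed hosts when privilege escalation is used.
#   [connection]
#   pipelining = True
#   # equivalent: export ANSIBLE_PIPELINING=True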
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description:
- This setting controls if become is skipped when the remote user and the become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
- 'The password file to use for the become plugin. --become-password-file.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use; the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
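# Illustrative only: the fact_caching_* keys above are typically set
# together; 'jsonfile' is one of the shipped cache plugins and the path is
# an arbitrary example.
#   [defaults]
#   fact_caching = jsonfile
#   fact_caching_connection = /tmp/ansible_facts_cache
#   fact_caching_timeout = 86400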
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- Action to take when a collection is loaded that does not support the running Ansible version (per the collection metadata key `requires_ansible`).
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
description: Defines the color to use when emitting verbose messages, i.e. those shown with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
description: 'The password file to use for the connection plugin. Equivalent to --connection-password-file.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have their coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
- By default Ansible will issue a warning when it receives one from a task action (module or action plugin).
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backward compatibility,
however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
through the templating engine later.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ~
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
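# Illustrative ansible.cfg sketch combining the privilege escalation options
# above (values are assumptions, not recommendations):
#   [privilege_escalation]
#   become = True
#   become_method = sudo
#   become_user = root
#   become_ask_pass = False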
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callbacks_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
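# Illustrative example enabling two callbacks that ship with ansible-core
# but are inactive by default (the plugin choice is an assumption):
#   [defaults]
#   callbacks_enabled = timer, profile_tasks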
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fallback to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action created by the implicit task is currently the ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules;
by default this will be ``ansible.builtin.setup`` for POSIX systems, but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
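# Illustrative example selecting the cache-friendly 'smart' policy
# (a sketch; either form works):
#   export ANSIBLE_GATHERING=smart
#   # or in ansible.cfg:
#   [defaults]
#   gathering = smart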
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
- The Ansible project recommends you **avoid ``merge`` for new projects.**
- It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
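# Illustrative alternative to 'merge' suggested in the description above:
# merging two dictionaries explicitly with the combine filter
# (variable names are assumptions):
#   merged_conf: "{{ base_conf | combine(env_conf, recursive=True) }}"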
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
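# Illustrative example pointing at multiple inventory sources
# (paths are assumptions):
#   [defaults]
#   inventory = hosts.yml,inventories/staging/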
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
- Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer
style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- Option for connections using a certificate or key file to authenticate, rather than an agent or passwords;
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list without causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger; this previously was done as a strategy plugin.
- Now all strategy plugins can inherit this behavior.
- The debugger defaults to activating when a task fails or a host is unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
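# Illustrative example (labels and path are assumptions): two vault ids
# tried in order, one read from a password file and one prompted for:
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev, prod@prompt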
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
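# Illustrative [diff] section using both options above
# (values are assumptions):
#   [diff]
#   always = True
#   context = 5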
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be silenced by adjusting this setting to ``ignore``.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error2
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
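# Illustrative example (module choice is an assumption): pinning the fact
# module for a group of hosts via group_vars instead of 'smart' detection:
#   ansible_facts_modules: ['ansible.builtin.setup']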
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
- The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
- A list of GPG status codes to ignore during GPG signature verification.
See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
- This should be a positive integer or ``all`` to indicate all signatures must successfully validate the collection.
- Prepend ``+`` to the value to fail if no valid signatures are found for the collection.
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, a warning, or to just ignore it.
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
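# Illustrative example (path is an assumption): pinning the interpreter for
# one host in an INI inventory instead of relying on discovery:
#   myhost ansible_python_interpreter=/usr/bin/python3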
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.10
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- /usr/bin/python3
- /usr/libexec/platform-python
- python2.7
- python2.6
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
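# Worked example for the choices above: with 'always' (or 'silently') an
# inventory group named "web-servers" is transformed to "web_servers";
# with 'never' (or 'ignore') the name is kept as-is.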
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or one optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse; otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
JINJA2_NATIVE_WARNING:
name: Running older than required Jinja version for jinja2_native warning
default: True
description: Toggle to control showing warnings related to running a Jinja version
older than required for jinja2_native.
env:
- name: ANSIBLE_JINJA2_NATIVE_WARNING
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.17"
ini:
- {key: jinja2_native_warning, section: defaults}
type: boolean
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows you to return to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for response from remote device before timing out persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
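# A minimal sketch (illustrative, not shipped) of the filter file this setting
# points at, following the documented :ref:`plugin_filtering_config` format;
# the module name listed is a placeholder:
#
#   filter_version: '1.0'
#   module_blacklist:
#     - example_module_to_reject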
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
    - This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
  description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
  description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
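# Example (illustrative) ansible.cfg stanza exercising the two tag settings
# above; the tag names are placeholders:
#
#   [tags]
#   run = configuration,packages
#   skip = debug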
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
  description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
  description: This variable is used to enable a bastion/jump host with a netconf connection. If set to True, the bastion/jump
    host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set
    to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
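# Illustration of the conversion this setting reacts to: a string parameter
# written unquoted as `version: 1.00` is parsed by YAML as the float 1.0 and
# then coerced back to the string '1.0' (note the lost trailing zero); with
# 'warn' (the current default) that coercion emits a warning, with 'error' it
# fails the task, and fully quoting the value ('1.00') avoids it entirely.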
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
- A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group.
Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,398 |
Remove deprecated PLUGIN_FILTERS_CFG.ini.0
|
### Summary
The config option `PLUGIN_FILTERS_CFG.ini.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77398
|
https://github.com/ansible/ansible/pull/77429
|
a421a38e1068924478535f4aa3b27fd566dc452e
|
d4dd4a82c0c091f24808c57443a4551acc95ad7f
| 2022-03-29T21:34:11Z |
python
| 2022-03-31T19:11:56Z |
changelogs/fragments/77398-remove-plugin_filters_cfg-default.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,398 |
Remove deprecated PLUGIN_FILTERS_CFG.ini.0
|
### Summary
The config option `PLUGIN_FILTERS_CFG.ini.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77398
|
https://github.com/ansible/ansible/pull/77429
|
a421a38e1068924478535f4aa3b27fd566dc452e
|
d4dd4a82c0c091f24808c57443a4551acc95ad7f
| 2022-03-29T21:34:11Z |
python
| 2022-03-31T19:11:56Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
  description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Accept list of cowsay templates that are 'safe' to use, set to empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
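# Example (illustrative) ansible.cfg stanza enabling pipelining; as noted in
# the description above, combining this with become/sudo requires 'requiretty'
# to be disabled in /etc/sudoers on the managed hosts:
#
#   [connection]
#   pipelining = True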
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description:
    - This setting controls if become is skipped when the remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
- 'The password file to use for the become plugin. --become-password-file.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (with the collection metadata key `requires_ansible`).
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
  description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
description: 'The password file to use for the connection plugin. --connection-password-file.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
    - Only files that match the path glob will have their coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
    - By default Ansible will issue a warning when one is received from a task action (module or action plugin).
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
      as this could represent a security risk. This option is provided to allow for backward compatibility;
      however, users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
      through the templating engine late.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ~
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callbacks_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
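# Example (illustrative) ansible.cfg stanza enabling two callbacks that ship
# with Ansible but are off by default (assuming both are installed):
#
#   [defaults]
#   callbacks_enabled = timer, profile_tasks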
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fallback to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action being created by the implicit task is currently ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules,
by default this will be ``ansible.builtin.setup`` for POSIX systems but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
# TODO: deprecate in 2.14
#deprecated:
# # TODO: when removing set playbook/play.py to default=None
# why: the module_defaults keyword is a more generic version and can apply to all calls to the
# M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
# version: "2.18"
# alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
- The Ansible project recommends you **avoid ``merge`` for new projects.**
    - It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
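# Worked example (illustrative): if group_vars define {a: 1, b: 2} and
# host_vars define {b: 3, c: 4} for the same variable name, 'replace' yields
# {b: 3, c: 4} (the higher-precedence definition wins wholesale), while
# 'merge' yields {a: 1, b: 3, c: 4}.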
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
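# Example (illustrative) use of the comma separated form via the environment;
# the file names are placeholders:
#
#   ANSIBLE_INVENTORY=staging.yml,production.yml ansible-playbook site.yml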
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
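# Illustration of the type preservation: templating "{{ [1, 2] + [3] }}" with
# jinja2_native=False returns the string "[1, 2, 3]", while with
# jinja2_native=True it returns an actual list [1, 2, 3].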
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
  description: File to which Ansible will log on the controller. When empty, logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
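# Example (illustrative) use in a template: placing "{{ ansible_managed }}"
# on the first line of a template renders this macro into the generated file,
# flagging it as managed by Ansible.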
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer
      style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
    - Option for connections using a certificate or key file to authenticate, rather than an agent or passwords;
      you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
  description: Port to use in remote connections; when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
      this setting makes Ansible 'tolerate' those in the list without causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
    - Whether or not to enable the task debugger; this previously was done as a strategy plugin.
    - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
      a task is failed or unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
  description: 'The label to use for the default vault id in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
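# Example (illustrative) ansible.cfg stanza combining the vault settings above;
# labels and paths are placeholders, and each list entry follows the
# label@source form used by --vault-id:
#
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev, prod@~/.vault_pass_prod
#   vault_encrypt_identity = dev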
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be silenced by setting this option to 'ignore'.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error2
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
- The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
- A list of GPG status codes to ignore during GPG signature verification.
See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
- This should be a positive integer or 'all' to indicate that all signatures must successfully validate the collection.
- Prepend '+' to the value to fail if no valid signatures are found for the collection.
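# Per the description above, accepted spellings include '1', '2', 'all', and
# the strict variants '+1' and '+all' (a sketch; SIGNATURE_COUNT_RE, imported
# in lib/ansible/cli/galaxy.py, holds the authoritative pattern).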
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, issue a warning, or just ignore it
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
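# A minimal sketch (assumed INI inventory syntax) of bypassing discovery for a
# host by pinning the interpreter explicitly via the variable named above:
#   [webservers]
#   web1.example.com ansible_python_interpreter=/usr/bin/python3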
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.10
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- /usr/bin/python3
- /usr/libexec/platform-python
- python2.7
- python2.6
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
description: Controls whether ansible-inventory accurately reflects Ansible's view into inventory, or a view optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
JINJA2_NATIVE_WARNING:
name: Running older than required Jinja version for jinja2_native warning
default: True
description: Toggle to control showing warnings related to running a Jinja version
older than required for jinja2_native
env:
- name: ANSIBLE_JINJA2_NATIVE_WARNING
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.17"
ini:
- {key: jinja2_native_warning, section: defaults}
type: boolean
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows you to return to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for response from remote device before timing out persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: default list of tags to skip in your plays; has precedence over Run Tags
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
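# Shell usage sketch (playbook name is a placeholder):
#   ANSIBLE_TASK_TIMEOUT=300 ansible-playbook site.yml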
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
description: This variable is used to enable bastion/jump host with netconf connection. If set to True, the bastion/jump
host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set
to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
- A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group.
Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,398 |
Remove deprecated PLUGIN_FILTERS_CFG.ini.0
|
### Summary
The config option `PLUGIN_FILTERS_CFG.ini.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77398
|
https://github.com/ansible/ansible/pull/77429
|
a421a38e1068924478535f4aa3b27fd566dc452e
|
d4dd4a82c0c091f24808c57443a4551acc95ad7f
| 2022-03-29T21:34:11Z |
python
| 2022-03-31T19:11:56Z |
test/integration/targets/plugin_filtering/filter_lookup.ini
|
[default]
retry_files_enabled = False
plugin_filters_cfg = ./filter_lookup.yml
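# Note: the [default] section name above appears to exercise the deprecated
# spelling that issue 77398 targets; the supported form would be (sketch):
#   [defaults]
#   plugin_filters_cfg = ./filter_lookup.yml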
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,398 |
Remove deprecated PLUGIN_FILTERS_CFG.ini.0
|
### Summary
The config option `PLUGIN_FILTERS_CFG.ini.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77398
|
https://github.com/ansible/ansible/pull/77429
|
a421a38e1068924478535f4aa3b27fd566dc452e
|
d4dd4a82c0c091f24808c57443a4551acc95ad7f
| 2022-03-29T21:34:11Z |
python
| 2022-03-31T19:11:56Z |
test/integration/targets/plugin_filtering/filter_modules.ini
|
[default]
retry_files_enabled = False
plugin_filters_cfg = ./filter_modules.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,398 |
Remove deprecated PLUGIN_FILTERS_CFG.ini.0
|
### Summary
The config option `PLUGIN_FILTERS_CFG.ini.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77398
|
https://github.com/ansible/ansible/pull/77429
|
a421a38e1068924478535f4aa3b27fd566dc452e
|
d4dd4a82c0c091f24808c57443a4551acc95ad7f
| 2022-03-29T21:34:11Z |
python
| 2022-03-31T19:11:56Z |
test/integration/targets/plugin_filtering/filter_ping.ini
|
[default]
retry_files_enabled = False
plugin_filters_cfg = ./filter_ping.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,398 |
Remove deprecated PLUGIN_FILTERS_CFG.ini.0
|
### Summary
The config option `PLUGIN_FILTERS_CFG.ini.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77398
|
https://github.com/ansible/ansible/pull/77429
|
a421a38e1068924478535f4aa3b27fd566dc452e
|
d4dd4a82c0c091f24808c57443a4551acc95ad7f
| 2022-03-29T21:34:11Z |
python
| 2022-03-31T19:11:56Z |
test/integration/targets/plugin_filtering/filter_stat.ini
|
[default]
retry_files_enabled = False
plugin_filters_cfg = ./filter_stat.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,398 |
Remove deprecated PLUGIN_FILTERS_CFG.ini.0
|
### Summary
The config option `PLUGIN_FILTERS_CFG.ini.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.12.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.14
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/77398
|
https://github.com/ansible/ansible/pull/77429
|
a421a38e1068924478535f4aa3b27fd566dc452e
|
d4dd4a82c0c091f24808c57443a4551acc95ad7f
| 2022-03-29T21:34:11Z |
python
| 2022-03-31T19:11:56Z |
test/integration/targets/plugin_filtering/no_filters.ini
|
[default]
retry_files_enabled = False
plugin_filters_cfg = ./empty.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,437 |
Add yaml integration test to verify missing cyaml works
|
### Summary
As a follow on to https://github.com/ansible/ansible/pull/77434 we should create an integration test that installs pyyaml in a venv without using the wheel to verify that everything works as expected without the libyaml C extension.
### Issue Type
Bug Report
### Component Name
lib/ansible/module_utils/common/yaml.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77437
|
https://github.com/ansible/ansible/pull/77446
|
ac56647f4a96ae3b8e18d59763d5b68a142d00e2
|
2797dc644aa8c809444ccac64cad63e0d9a3f9fe
| 2022-03-31T19:20:39Z |
python
| 2022-04-01T20:53:19Z |
test/integration/targets/pyyaml/aliases
| |
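A minimal way to probe the condition this issue is about, i.e. whether PyYAML was installed with the libyaml C extension (a sketch; it assumes only that PyYAML is importable in the test venv):
```python
# Probe whether PyYAML exposes the libyaml-backed loader.
try:
    from yaml import CSafeLoader  # present only when libyaml bindings were built
    HAVE_LIBYAML = True
except ImportError:
    HAVE_LIBYAML = False

print("libyaml available" if HAVE_LIBYAML else "pure-Python loader in use")
```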
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,437 |
Add yaml integration test to verify missing cyaml works
|
### Summary
As a follow on to https://github.com/ansible/ansible/pull/77434 we should create an integration test that installs pyyaml in a venv without using the wheel to verify that everything works as expected without the libyaml C extension.
### Issue Type
Bug Report
### Component Name
lib/ansible/module_utils/common/yaml.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77437
|
https://github.com/ansible/ansible/pull/77446
|
ac56647f4a96ae3b8e18d59763d5b68a142d00e2
|
2797dc644aa8c809444ccac64cad63e0d9a3f9fe
| 2022-03-31T19:20:39Z |
python
| 2022-04-01T20:53:19Z |
test/integration/targets/pyyaml/runme.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,767 |
Clarify ansible-galaxy collection install warning message
|
### Summary
```(venv) ➜ ansible-navigator git:(main) ansible-galaxy collection install cisco.ios -p ./collections
Starting galaxy collection install process
[WARNING]: The specified collections path '/Users/bthornto/github/ansible-navigator/collections' is not part of the configured Ansible collections paths '/Users/bthornto/.ansible/collections:/usr/share/ansible/collections'. The installed collection won't be
picked up in an Ansible run.
```
It will be picked up in an Ansible run; in fact, playbook-adjacent collections are fully supported.
Can `, unless within a playbook adjacent collections directory` be added to the message?
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.0]
config file = None
configured module search path = ['/Users/bthornto/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/bthornto/github/ansible-navigator/venv/lib/python3.8/site-packages/ansible
ansible collection location = /Users/bthornto/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/bthornto/github/ansible-navigator/venv/bin/ansible
python version = 3.8.2 (default, Dec 21 2020, 15:06:04) [Clang 12.0.0 (clang-1200.0.32.29)]
jinja version = 3.0.0
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
all
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```shell
(venv) ➜ ansible-navigator git:(main) ansible-galaxy collection install cisco.ios -p ./collections
Starting galaxy collection install process
[WARNING]: The specified collections path '/Users/bthornto/github/ansible-navigator/collections' is not part of the configured Ansible collections paths '/Users/bthornto/.ansible/collections:/usr/share/ansible/collections'. The installed collection won't be
picked up in an Ansible run.
```
### Expected Results
warning messages indicates that if the installation directory is playbook adjacent and called collections it will be picked up by ansible while running the playbook
### Actual Results
```console
received somewhat invalid warning message
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74767
|
https://github.com/ansible/ansible/pull/74859
|
82c622a26a8a05a675a83f629eb6ef4da48afacb
|
3fe5f577e6bf944c2927f77eb687e0930c21643f
| 2021-05-19T12:40:30Z |
python
| 2022-04-04T20:40:02Z |
lib/ansible/cli/galaxy.py
|
#!/usr/bin/env python
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import json
import os.path
import re
import shutil
import sys
import textwrap
import time
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import (
build_collection,
download_collections,
find_existing_collections,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
verify_collections,
SIGNATURE_COUNT_RE,
)
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.collection.gpg import GPG_ERROR_MAP
from ansible.galaxy.dependency_resolution.dataclasses import Requirement
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.yaml import yaml_dump, yaml_load
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
SERVER_DEF = [
('url', True, 'str'),
('username', False, 'str'),
('password', False, 'str'),
('token', False, 'str'),
('auth_url', False, 'str'),
('v3', False, 'bool'),
('validate_certs', False, 'bool'),
('client_id', False, 'str'),
]
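# Descriptive note (inferred from the values above, e.g. ('url', True, 'str')):
# each tuple appears to be (ini key, required?, value type) for the per-server
# [galaxy_server.<name>] config sections.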
def with_collection_artifacts_manager(wrapped_method):
"""Inject an artifacts manager if not passed explicitly.
This decorator constructs a ConcreteArtifactsManager and maintains
the related temporary directory auto-cleanup around the target
method invocation.
"""
def method_wrapper(*args, **kwargs):
if 'artifacts_manager' in kwargs:
return wrapped_method(*args, **kwargs)
artifacts_manager_kwargs = {'validate_certs': not context.CLIARGS['ignore_certs']}
keyring = context.CLIARGS.get('keyring', None)
if keyring is not None:
artifacts_manager_kwargs.update({
'keyring': GalaxyCLI._resolve_path(keyring),
'required_signature_count': context.CLIARGS.get('required_valid_signature_count', None),
'ignore_signature_errors': context.CLIARGS.get('ignore_gpg_errors', None),
})
with ConcreteArtifactsManager.under_tmpdir(
C.DEFAULT_LOCAL_TMP,
**artifacts_manager_kwargs
) as concrete_artifact_cm:
kwargs['artifacts_manager'] = concrete_artifact_cm
return wrapped_method(*args, **kwargs)
return method_wrapper
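# Hedged usage sketch (the method name is illustrative only): a decorated CLI
# action receives a ready-to-use manager and inherits the temporary-directory
# cleanup from the context manager above.
#
#   @with_collection_artifacts_manager
#   def execute_something(self, artifacts_manager=None):
#       ...  # downloaded artifacts are removed when this call returns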
def _display_header(path, h1, h2, w1=10, w2=7):
display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format(
path,
h1,
h2,
'-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header
'-' * max([len(h2), w2]),
cwidth=w1,
vwidth=w2,
))
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7):
display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format(
fqcn=to_text(collection.fqcn),
version=collection.ver,
cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header
vwidth=max(vwidth, min_vwidth)
))
def _get_collection_widths(collections):
if not is_iterable(collections):
collections = (collections, )
fqcn_set = {to_text(c.fqcn) for c in collections}
version_set = {to_text(c.ver) for c in collections}
fqcn_length = len(max(fqcn_set, key=len))
version_length = len(max(version_set, key=len))
return fqcn_length, version_length
def validate_signature_count(value):
match = re.match(SIGNATURE_COUNT_RE, value)
if match is None:
raise ValueError("{value} is not a valid signature count value")
return value
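# Per the GALAXY_REQUIRED_VALID_SIGNATURE_COUNT description, valid inputs take
# forms like '1', 'all', '+1', and '+all'; anything else raises above (note
# inferred from the config docs rather than from the regex itself).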
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
name = 'ansible-galaxy'
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
self._raw_args = args
self._implicit_role = False
if len(args) > 1:
# Inject role into sys.argv[1] as a backwards compatibility step
if args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
# Remove this in Ansible 2.13 when we also remove -v as an option on the root parser for ansible-galaxy.
idx = 2 if args[1].startswith('-v') else 1
args.insert(idx, 'role')
self._implicit_role = True
# since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization
if args[1:3] == ['role', 'login']:
display.error(
"The login command was removed in late 2020. An API key is now required to publish roles or collections "
"to Galaxy. The key can be found at https://galaxy.ansible.com/me/preferences, and passed to the "
"ansible-galaxy CLI via a file at {0} or (insecurely) via the `--token` "
"command-line argument.".format(to_text(C.GALAXY_TOKEN_PATH)))
sys.exit(1)
self.api_servers = []
self.galaxy = None
self._api = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs',
default=C.GALAXY_IGNORE_CERTS, help='Ignore SSL certificate validation errors.')
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
collections_path = opt_help.argparse.ArgumentParser(add_help=False)
collections_path.add_argument('-p', '--collections-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True),
default=AnsibleCollectionConfig.collection_paths,
action=opt_help.PrependListAction,
help="One or more directories to search for collections in addition "
"to the default COLLECTIONS_PATHS. Separate multiple paths "
"with '{0}'.".format(os.path.pathsep))
cache_options = opt_help.argparse.ArgumentParser(add_help=False)
cache_options.add_argument('--clear-response-cache', dest='clear_response_cache', action='store_true',
default=False, help='Clear the existing server response cache.')
cache_options.add_argument('--no-cache', dest='no_cache', action='store_true', default=False,
help='Do not use the server response cache.')
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_download_options(collection_parser, parents=[common, cache_options])
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force, cache_options])
self.add_list_options(collection_parser, parents=[common, collections_path])
self.add_verify_options(collection_parser, parents=[common, collections_path])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_download_options(self, parser, parents=None):
download_parser = parser.add_parser('download', parents=parents,
help='Download collections and their dependencies as a tarball for an '
'offline install.')
download_parser.set_defaults(func=self.execute_download)
download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*')
download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collection(s) listed as dependencies.")
download_parser.add_argument('-p', '--download-path', dest='download_path',
default='./collections',
help='The directory to download the collections to.')
download_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be downloaded.')
download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_COLLECTION_SKELETON if galaxy_type == 'collection' else C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
galaxy_type = 'role'
if parser.metavar == 'COLLECTION_ACTION':
galaxy_type = 'collection'
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type))
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type)
if galaxy_type == 'collection':
list_parser.add_argument('--format', dest='output_format', choices=('human', 'yaml', 'json'), default='human',
help="Format to display the list of collections in.")
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_verify_options(self, parser, parents=None):
galaxy_type = 'collection'
verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) '
'found on the server and the installed copy. This does not verify dependencies.')
verify_parser.set_defaults(func=self.execute_verify)
verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The installed collection(s) name. '
'This is mutually exclusive with --requirements-file.')
verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during verification and continue with the next specified collection.')
verify_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Validate collection integrity locally without contacting server for '
'canonical manifest hash.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
verify_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
verify_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before using '
'it to verify the rest of the contents of a collection from a Galaxy server. Use in '
'conjunction with a positional collection name (mutually exclusive with --requirements-file).')
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
verify_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
verify_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
                                     'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=self._get_default_collection_path(),
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
install_parser.add_argument('-U', '--upgrade', dest='upgrade', action='store_true', default=False,
help='Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided')
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before '
'installing the collection from a Galaxy server. Use in conjunction with a positional '
'collection name (mutually exclusive with --requirements-file).')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
else:
install_parser.add_argument('-r', '--role-file', dest='requirements',
help='A file containing a list of roles to be installed.')
if self._implicit_role and ('-r' in self._raw_args or '--role-file' in self._raw_args):
# Any collections in the requirements files will also be installed
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during collection signature verification')
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection is built to. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required, option_type):
return {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
'type': option_type,
}
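        # Fall back to the global --ignore-certs behaviour whenever a server entry does not set validate_certs itself.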
validate_certs_fallback = not context.CLIARGS['ignore_certs']
galaxy_options = {}
for optional_key in ['clear_response_cache', 'no_cache']:
if optional_key in context.CLIARGS:
galaxy_options[optional_key] = context.CLIARGS[optional_key]
config_servers = []
        # Need to filter out empty strings or non-truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_priority, server_key in enumerate(server_list, start=1):
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req, ensure_type)) for k, req, ensure_type in SERVER_DEF)
defs = AnsibleLoader(yaml_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so
            # it doesn't need to be passed as a kwarg to GalaxyAPI
auth_url = server_options.pop('auth_url', None)
client_id = server_options.pop('client_id', None)
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
available_api_versions = None
v3 = server_options.pop('v3', None)
validate_certs = server_options['validate_certs']
if validate_certs is None:
validate_certs = validate_certs_fallback
server_options['validate_certs'] = validate_certs
if v3:
# This allows a user to explicitly indicate the server uses the /v3 API
# This was added for testing against pulp_ansible and I'm not sure it has
# a practical purpose outside of this use case. As such, this option is not
# documented as of now
server_options['available_api_versions'] = {'v3': '/v3'}
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username,
server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=validate_certs,
client_id=client_id)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
server_options.update(galaxy_options)
config_servers.append(GalaxyAPI(
self.galaxy, server_key,
priority=server_priority,
**server_options
))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
if cmd_server:
            # Cmd args take precedence over the config entry but first check if the arg was a name and use that config
            # entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
priority=len(config_servers) + 1,
validate_certs=validate_certs_fallback,
**galaxy_options
))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
priority=0,
validate_certs=validate_certs_fallback,
**galaxy_options
))
return context.CLIARGS['func']()
@property
def api(self):
if self._api:
return self._api
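        # Probe each configured server and use the first one that advertises the v1 API;
        # fall back to the first configured server if none of them do.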
for server in self.api_servers:
try:
if u'v1' in server.available_api_versions:
self._api = server
break
except Exception:
continue
if not self._api:
self._api = self.api_servers[0]
return self._api
def _get_default_collection_path(self):
return C.COLLECTIONS_PATHS[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True, artifacts_manager=None, validate_signature_options=True):
"""
        Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are 2
        requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name, defaults to Galaxy name from Galaxy or name of repo if src is a URL.
          scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git.
version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
type: git|file|url|galaxy
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:param artifacts_manager: Artifacts manager.
        :return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles') or []:
requirements['roles'] += parse_role_req(role_req)
requirements['collections'] = [
Requirement.from_requirement_dict(
self._init_coll_req_dict(collection_req),
artifacts_manager,
validate_signature_options,
)
for collection_req in file_requirements.get('collections') or []
]
return requirements
def _init_coll_req_dict(self, coll_req):
if not isinstance(coll_req, dict):
# Assume it's a string:
return {'name': coll_req}
if (
'name' not in coll_req or
not coll_req.get('source') or
coll_req.get('type', 'galaxy') != 'galaxy'
):
return coll_req
# Try and match up the requirement source with our list of Galaxy API
# servers defined in the config, otherwise create a server with that
# URL without any auth.
coll_req['source'] = next(
iter(
srvr for srvr in self.api_servers
if coll_req['source'] in {srvr.name, srvr.api_server}
),
GalaxyAPI(
self.galaxy,
'explicit_requirement_{name!s}'.format(
name=coll_req['name'],
),
coll_req['source'],
validate_certs=not context.CLIARGS['ignore_certs'],
),
)
return coll_req
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
# Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description'].
galaxy_info = role_info.get('galaxy_info', {})
description = role_info.get('description', galaxy_info.get('description', ''))
text.append(u"\tdescription: %s" % description)
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
# make sure we have a trailing newline returned
text.append(u"")
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
loader = DataLoader()
templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config})
templar.environment.filters['comment_ify'] = comment_ify
meta_value = templar.template(meta_template)
return meta_value
def _require_one_of_collections_requirements(
self, collections, requirements_file,
signatures=None,
artifacts_manager=None,
):
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
if signatures is not None:
raise AnsibleError(
"The --signatures option and --requirements-file are mutually exclusive. "
"Use the --signatures with positional collection_name args or provide a "
"'signatures' key for requirements in the --requirements-file."
)
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(
requirements_file,
allow_old_format=False,
artifacts_manager=artifacts_manager,
)
else:
requirements = {
'collections': [
Requirement.from_string(coll_input, artifacts_manager, signatures)
for coll_input in collections
],
'roles': [],
}
return requirements
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(
to_text(collection_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force,
)
@with_collection_artifacts_manager
def execute_download(self, artifacts_manager=None):
collections = context.CLIARGS['args']
no_deps = context.CLIARGS['no_deps']
download_path = context.CLIARGS['download_path']
requirements_file = context.CLIARGS['requirements']
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
artifacts_manager=artifacts_manager,
)['collections']
download_path = GalaxyCLI._resolve_path(download_path)
b_download_path = to_bytes(download_path, errors='surrogate_or_strict')
if not os.path.exists(b_download_path):
os.makedirs(b_download_path)
download_collections(
requirements, download_path, self.api_servers, no_deps,
context.CLIARGS['allow_pre_release'],
artifacts_manager=artifacts_manager,
)
return 0
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
dependencies=[],
))
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
skeleton_ignore_expressions = C.GALAXY_COLLECTION_SKELETON_IGNORE
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
if obj_skeleton is not None:
own_skeleton = False
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
loader = DataLoader()
templar = Templar(loader, variables=inject_data)
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
# Filter out ignored directory names
# Use [:] to mutate the list os.walk uses
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict')
b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict')
with open(dest_file, 'wb') as df:
df.write(b_rendered)
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
prints out detailed information about an installed role as well as info available from the galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
if not context.CLIARGS['offline']:
remote_data = None
try:
remote_data = self.api.lookup_role_by_name(role, False)
except AnsibleError as e:
if e.http_code == 400 and 'Bad Request' in e.message:
# Role does not exist in Ansible Galaxy
data = u"- the role %s was not found" % role
break
raise AnsibleError("Unable to find info about '%s': %s" % (role, e))
if remote_data:
role_info.update(remote_data)
elif context.CLIARGS['offline'] and not gr._exists:
data = u"- the role %s was not found" % role
break
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data += self._display_role_info(role_info)
self.pager(data)
@with_collection_artifacts_manager
def execute_verify(self, artifacts_manager=None):
collections = context.CLIARGS['args']
search_paths = context.CLIARGS['collections_path']
ignore_errors = context.CLIARGS['ignore_errors']
local_verify_only = context.CLIARGS['offline']
requirements_file = context.CLIARGS['requirements']
signatures = context.CLIARGS['signatures']
if signatures is not None:
signatures = list(signatures)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)['collections']
resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths]
results = verify_collections(
requirements, resolved_paths,
self.api_servers, ignore_errors,
local_verify_only=local_verify_only,
artifacts_manager=artifacts_manager,
)
if any(result for result in results if not result.success):
return 1
return 0
@with_collection_artifacts_manager
def execute_install(self, artifacts_manager=None):
"""
        Install one or more roles (``ansible-galaxy role install``), or one or more collections (``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file.
:param artifacts_manager: Artifacts manager.
"""
install_items = context.CLIARGS['args']
requirements_file = context.CLIARGS['requirements']
collection_path = None
signatures = context.CLIARGS.get('signatures')
if signatures is not None:
signatures = list(signatures)
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
two_type_warning = "The requirements file '%s' contains {0}s which will be ignored. To install these {0}s " \
"run 'ansible-galaxy {0} install -r' or to install both at the same time run " \
"'ansible-galaxy install -r' without a custom install path." % to_text(requirements_file)
# TODO: Would be nice to share the same behaviour with args and -r in collections and roles.
collection_requirements = []
role_requirements = []
if context.CLIARGS['type'] == 'collection':
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path'])
requirements = self._require_one_of_collections_requirements(
install_items, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)
collection_requirements = requirements['collections']
if requirements['roles']:
display.vvv(two_type_warning.format('role'))
else:
if not install_items and requirements_file is None:
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
if requirements_file:
if not (requirements_file.endswith('.yaml') or requirements_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
galaxy_args = self._raw_args
will_install_collections = self._implicit_role and '-p' not in galaxy_args and '--roles-path' not in galaxy_args
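                # Collections listed in the requirements file are only installed implicitly when the bare
                # 'ansible-galaxy install' form is used without an explicit roles path.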
requirements = self._parse_requirements_file(
requirements_file,
artifacts_manager=artifacts_manager,
validate_signature_options=will_install_collections,
)
role_requirements = requirements['roles']
# We can only install collections and roles at the same time if the type wasn't specified and the -p
# argument was not used. If collections are present in the requirements then at least display a msg.
if requirements['collections'] and (not self._implicit_role or '-p' in galaxy_args or
'--roles-path' in galaxy_args):
# We only want to display a warning if 'ansible-galaxy install -r ... -p ...'. Other cases the user
# was explicit about the type and shouldn't care that collections were skipped.
display_func = display.warning if self._implicit_role else display.vvv
display_func(two_type_warning.format('collection'))
else:
collection_path = self._get_default_collection_path()
collection_requirements = requirements['collections']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
role_requirements.append(GalaxyRole(self.galaxy, self.api, **role))
if not role_requirements and not collection_requirements:
display.display("Skipping install, no requirements found")
return
if role_requirements:
display.display("Starting galaxy role install process")
self._execute_install_role(role_requirements)
if collection_requirements:
display.display("Starting galaxy collection install process")
# Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in
# the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above).
self._execute_install_collection(
collection_requirements, collection_path,
artifacts_manager=artifacts_manager,
)
def _execute_install_collection(
self, requirements, path, artifacts_manager,
):
force = context.CLIARGS['force']
ignore_errors = context.CLIARGS['ignore_errors']
no_deps = context.CLIARGS['no_deps']
force_with_deps = context.CLIARGS['force_with_deps']
disable_gpg_verify = context.CLIARGS['disable_gpg_verify']
# If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS
allow_pre_release = context.CLIARGS.get('allow_pre_release', False)
upgrade = context.CLIARGS.get('upgrade', False)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection won't be picked up in an Ansible "
"run." % (to_text(path), to_text(":".join(collections_path))))
output_path = validate_collection_path(path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(
requirements, output_path, self.api_servers, ignore_errors,
no_deps, force, force_with_deps, upgrade,
allow_pre_release=allow_pre_release,
artifacts_manager=artifacts_manager,
disable_gpg_verify=disable_gpg_verify,
)
return 0
def _execute_install_role(self, requirements):
role_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
for role in requirements:
            # only process roles from the roles file whose names match the given args, if any were given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = (role.metadata.get('dependencies') or []) + role.requirements
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in requirements:
display.display('- adding dependency: %s' % to_text(dep_role))
requirements.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
requirements.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
requirements.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List installed collections or roles
"""
if context.CLIARGS['type'] == 'role':
self.execute_list_role()
elif context.CLIARGS['type'] == 'collection':
self.execute_list_collection()
def execute_list_role(self):
"""
List all roles installed on the local system or a specific role
"""
path_found = False
role_found = False
warnings = []
roles_search_paths = context.CLIARGS['roles_path']
role_name = context.CLIARGS['role']
for path in roles_search_paths:
role_path = GalaxyCLI._resolve_path(path)
            if os.path.isdir(role_path):
                path_found = True
            else:
                warnings.append("- the configured path {0} does not exist.".format(role_path))
continue
if role_name:
# show the requested role, if it exists
gr = GalaxyRole(self.galaxy, self.api, role_name, path=os.path.join(role_path, role_name))
if os.path.isdir(gr.path):
role_found = True
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
break
warnings.append("- the role %s was not found" % role_name)
else:
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
if not os.path.isdir(role_path):
                    warnings.append("- the configured path %s exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.api, path_file, path=path)
if gr.metadata:
_display_role(gr)
# Do not warn if the role was found in any of the search paths
if role_found and role_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
return 0
@with_collection_artifacts_manager
def execute_list_collection(self, artifacts_manager=None):
"""
List all collections installed on the local system
:param artifacts_manager: Artifacts manager.
"""
output_format = context.CLIARGS['output_format']
collections_search_paths = set(context.CLIARGS['collections_path'])
collection_name = context.CLIARGS['collection']
default_collections_path = AnsibleCollectionConfig.collection_paths
collections_in_paths = {}
warnings = []
path_found = False
collection_found = False
for path in collections_search_paths:
collection_path = GalaxyCLI._resolve_path(path)
if not os.path.exists(path):
if path in default_collections_path:
# don't warn for missing default paths
continue
warnings.append("- the configured path {0} does not exist.".format(collection_path))
continue
if not os.path.isdir(collection_path):
                warnings.append("- the configured path {0} exists, but it is not a directory.".format(collection_path))
continue
path_found = True
if collection_name:
# list a specific collection
validate_collection_name(collection_name)
namespace, collection = collection_name.split('.')
collection_path = validate_collection_path(collection_path)
b_collection_path = to_bytes(os.path.join(collection_path, namespace, collection), errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
warnings.append("- unable to find {0} in collection paths".format(collection_name))
continue
if not os.path.isdir(collection_path):
                    warnings.append("- the configured path {0} exists, but it is not a directory.".format(collection_path))
continue
collection_found = True
try:
collection = Requirement.from_dir_path_as_unknown(
b_collection_path,
artifacts_manager,
)
except ValueError as val_err:
six.raise_from(AnsibleError(val_err), val_err)
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver}
}
continue
fqcn_width, version_width = _get_collection_widths([collection])
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
_display_collection(collection, fqcn_width, version_width)
else:
# list all collections
collection_path = validate_collection_path(path)
if os.path.isdir(collection_path):
display.vvv("Searching {0} for collections".format(collection_path))
collections = list(find_existing_collections(
collection_path, artifacts_manager,
))
else:
                    # There was no 'ansible_collections/' directory in the path, so there
                    # are no collections here.
display.vvv("No 'ansible_collections' directory found at {0}".format(collection_path))
continue
if not collections:
display.vvv("No collections found at {0}".format(collection_path))
continue
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver} for collection in collections
}
continue
# Display header
fqcn_width, version_width = _get_collection_widths(collections)
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
# Sort collections by the namespace and name
for collection in sorted(collections, key=to_text):
_display_collection(collection, fqcn_width, version_width)
# Do not warn if the specific collection was found in any of the search paths
if collection_found and collection_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
if output_format == 'json':
display.display(json.dumps(collections_in_paths))
elif output_format == 'yaml':
display.display(yaml_dump(collections_in_paths))
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' searches for roles on the Ansible Galaxy server'''
page_size = 1000
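        # Only the first page of results is displayed; matches beyond page_size are reported as a count only.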
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return True
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
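            # Poll the import task every 10 seconds until it reaches a terminal SUCCESS or FAILED state.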
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
            display.display("Secret removed. Integrations using this secret will no longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return True
def main(args=None):
GalaxyCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,433 |
Remove advanced example which does not work.
|
### Summary
Please just remove:
```yaml
vars:
myvarnames: "{{ q('varnames', '^my') }}"
mydict: "{{ dict(myvarnames | zip(q('vars', *myvarnames))) }}"
```
https://docs.ansible.com/ansible/latest/user_guide/complex_data_manipulation.html#id15
No tests exist for this, and a simple copy-paste of the example produces what amounts to a DoS crash (a recursive templating loop).
There is no additional reference where this was ever used or tested.
Original:
https://github.com/ansible/ansible/pull/46979/files#diff-0b3f5962e2d097f45656021c9579639763d56e82a67bd97d37694027e2f686d9R205-R213
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/complex_data_manipulation.rst
### Ansible Version
```console
ansible [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/robert.rettig/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /home/robert.rettig/.ansible/collections:/usr/share/ansible/collections
executable location = /bin/ansible
python version = 3.8.12 (default, Sep 21 2021, 00:10:52) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 2.10.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
CentOS Stream 8
### Additional Information
`ansible-playbook debug.yml`
with _debug.yml_:
```yaml
- become: no
connection: local
gather_facts: no
hosts: localhost
tasks:
- debug:
var: mydict
vars:
myvarnames: "{{ q('varnames', '^my') }}"
mydict: "{{ dict(myvarnames | zip(q('vars', *myvarnames))) }}"
```
error:
```txt
...
original message: recursive loop detected in template string: {{ dict(myvarnames | zip(q('vars', *myvarnames))) }}"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77433
|
https://github.com/ansible/ansible/pull/77436
|
a85261342acf2e9a0d7e9dc792ef1edadf32e4b2
|
d7d2afd2031618bc3614b48c4d4be5aa409e0155
| 2022-03-31T17:19:30Z |
python
| 2022-04-07T17:44:05Z |
docs/docsite/rst/user_guide/complex_data_manipulation.rst
|
.. _complex_data_manipulation:
Data manipulation
#########################
In many cases, you need to do some complex operations with your variables. While Ansible is not recommended as a data processing/manipulation tool, you can use the existing Jinja2 templating in conjunction with the many added Ansible filters, lookups and tests to do some very complex transformations.
Let's start with a quick definition of each type of plugin:
- lookups: Mainly used to query 'external data', in Ansible these were the primary part of loops using the ``with_<lookup>`` construct, but they can be used independently to return data for processing. They normally return a list due to their primary function in loops as mentioned previously. Used with the ``lookup`` or ``query`` Jinja2 operators.
- filters: used to change/transform data, used with the ``|`` Jinja2 operator.
- tests: used to validate data, used with the ``is`` Jinja2 operator.
.. note::
   * Some tests and filters are provided directly by Jinja2, so their availability depends on the Jinja2 version, not Ansible.
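
To make the three plugin types concrete, here is a minimal, illustrative task that uses one of each; the environment variable names are only examples:

.. code-block:: YAML+Jinja
   :caption: A lookup, a filter and a test in a single task

   - name: Print upper-cased environment values that are strings
     ansible.builtin.debug:
       msg: "{{ item }}"
     loop: "{{ q('ansible.builtin.env', 'HOME', 'USER') | map('upper') | list }}"  # lookup + filter
     when: item is string  # test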
.. _for_loops_or_list_comprehensions:
Loops and list comprehensions
=============================
Most programming languages have loops (``for``, ``while``, and so on) and list comprehensions to do transformations on lists including lists of objects. Jinja2 has a few filters that provide this functionality: ``map``, ``select``, ``reject``, ``selectattr``, ``rejectattr``.
- map: this is a basic for loop that just allows you to change every item in a list, using the 'attribute' keyword you can do the transformation based on attributes of the list elements.
- select/reject: this is a for loop with a condition, that allows you to create a subset of a list that matches (or not) based on the result of the condition.
- selectattr/rejectattr: very similar to the above but it uses a specific attribute of the list elements for the conditional statement.
.. _exponential_backoff:
Use a loop to create exponential backoff for retries/until.
.. code-block:: yaml
- name: retry ping 10 times with exponential backoff delay
ping:
retries: 10
delay: '{{item|int}}'
loop: "{{ range(1, 10) | map('pow', 2) }}"
.. _keys_from_dict_matching_list:
Extract keys from a dictionary matching elements from a list
------------------------------------------------------------
The Python equivalent code would be:
.. code-block:: python
chains = [1, 2]
for chain in chains:
for config in chains_config[chain]['configs']:
print(config['type'])
There are several ways to do it in Ansible, this is just one example:
.. code-block:: YAML+Jinja
:emphasize-lines: 4
:caption: Way to extract matching keys from a list of dictionaries
tasks:
- name: Show extracted list of keys from a list of dictionaries
ansible.builtin.debug:
msg: "{{ chains | map('extract', chains_config) | map(attribute='configs') | flatten | map(attribute='type') | flatten }}"
vars:
chains: [1, 2]
chains_config:
1:
foo: bar
configs:
- type: routed
version: 0.1
- type: bridged
version: 0.2
2:
foo: baz
configs:
- type: routed
version: 1.0
- type: bridged
version: 1.1
.. code-block:: ansible-output
:caption: Results of debug task, a list with the extracted keys
ok: [localhost] => {
"msg": [
"routed",
"bridged",
"routed",
"bridged"
]
}
.. code-block:: YAML+Jinja
:caption: Get the unique list of values of a variable that vary per host
vars:
unique_value_list: "{{ groups['all'] | map ('extract', hostvars, 'varname') | list | unique}}"
.. _find_mount_point:
Find mount point
----------------
In this case, we want to find the mount point for a given path across our machines, since we already collect mount facts, we can use the following:
.. code-block:: YAML+Jinja
:caption: Use selectattr to filter mounts into list I can then sort and select the last from
:emphasize-lines: 8
- hosts: all
gather_facts: True
vars:
path: /var/lib/cache
tasks:
- name: The mount point for {{path}}, found using the Ansible mount facts, [-1] is the same as the 'last' filter
ansible.builtin.debug:
msg: "{{(ansible_facts.mounts | selectattr('mount', 'in', path) | list | sort(attribute='mount'))[-1]['mount']}}"
.. _omit_elements_from_list:
Omit elements from a list
-------------------------
The special ``omit`` variable ONLY works with module options, but we can still use it in other ways as an identifier to tailor a list of elements:
.. code-block:: YAML+Jinja
:caption: Inline list filtering when feeding a module option
:emphasize-lines: 3, 6
- name: Enable a list of Windows features, by name
ansible.builtin.set_fact:
win_feature_list: "{{ namestuff | reject('equalto', omit) | list }}"
vars:
namestuff:
- "{{ (fs_installed_smb_v1 | default(False)) | ternary(omit, 'FS-SMB1') }}"
- "foo"
- "bar"
Another way is to avoid adding elements to the list in the first place, so you can just use it directly:
.. code-block:: YAML+Jinja
:caption: Using set_fact in a loop to increment a list conditionally
:emphasize-lines: 3, 4, 6
- name: Build unique list with some items conditionally omitted
ansible.builtin.set_fact:
namestuff: "{{ (namestuff | default([])) | union([item]) }}"
when: item != omit
loop:
- "{{ (fs_installed_smb_v1 | default(False)) | ternary(omit, 'FS-SMB1') }}"
- "foo"
- "bar"
.. _combine_optional_values:
Combine values from same list of dicts
---------------------------------------
Combining positive and negative filters from examples above, you can get a 'value when it exists' and a 'fallback' when it doesn't.
.. code-block:: YAML+Jinja
:caption: Use selectattr and rejectattr to get the ansible_host or inventory_hostname as needed
- hosts: localhost
tasks:
- name: Check hosts in inventory that respond to ssh port
wait_for:
host: "{{ item }}"
port: 22
loop: '{{ has_ah + no_ah }}'
vars:
has_ah: '{{ hostvars|dictsort|selectattr("1.ansible_host", "defined")|map(attribute="1.ansible_host")|list }}'
no_ah: '{{ hostvars|dictsort|rejectattr("1.ansible_host", "defined")|map(attribute="0")|list }}'
.. _custom_fileglob_variable:
Custom Fileglob Based on a Variable
-----------------------------------
This example uses `Python argument list unpacking <https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists>`_ to create a custom list of fileglobs based on a variable.
.. code-block:: YAML+Jinja
:caption: Using fileglob with a list based on a variable.
- hosts: all
vars:
mygroups:
- prod
- web
tasks:
- name: Copy a glob of files based on a list of groups
copy:
src: "{{ item }}"
dest: "/tmp/{{ item }}"
loop: '{{ q("fileglob", *globlist) }}'
vars:
globlist: '{{ mygroups | map("regex_replace", "^(.*)$", "files/\1/*.conf") | list }}'
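The ``regex_replace`` step above is easy to sanity-check outside of Ansible. A rough plain-Python equivalent of building ``globlist`` (illustration only, not part of the original guide):
.. code-block:: python
import re
mygroups = ["prod", "web"]
globlist = [re.sub(r"^(.*)$", r"files/\1/*.conf", group) for group in mygroups]
print(globlist)  # ['files/prod/*.conf', 'files/web/*.conf']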
.. _complex_type_transformations:
Complex Type transformations
=============================
Jinja provides filters for simple data type transformations (``int``, ``bool``, and so on), but when you want to transform data structures things are not as easy.
You can use loops and list comprehensions as shown above to help; other filters and lookups can also be chained and used to achieve more complex transformations.
.. _create_dictionary_from_list:
Create dictionary from list
---------------------------
In most languages it is easy to create a dictionary (a.k.a. map/associative array/hash and so on) from a list of pairs; in Ansible there are a couple of ways to do it, and the best one for you might depend on the source of your data.
These examples produce ``{"a": "b", "c": "d"}``:
.. code-block:: YAML+Jinja
:caption: Simple list to dict by assuming the list is [key, value, key, value, ...]
vars:
single_list: [ 'a', 'b', 'c', 'd' ]
mydict: "{{ dict(single_list | slice(2)) }}"
.. code-block:: YAML+Jinja
:caption: It is simpler when we have a list of pairs:
vars:
list_of_pairs: [ ['a', 'b'], ['c', 'd'] ]
mydict: "{{ dict(list_of_pairs) }}"
Both end up being the same thing, with ``slice(2)`` transforming ``single_list`` to a ``list_of_pairs`` generator.
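A rough plain-Python sketch of both forms (illustration only; for this even-length list, Jinja's ``slice(2)`` yields the same consecutive pairs as the slicing below):
.. code-block:: python
single_list = ["a", "b", "c", "d"]
pairs = [single_list[i:i + 2] for i in range(0, len(single_list), 2)]  # [['a', 'b'], ['c', 'd']]
print(dict(pairs))  # {'a': 'b', 'c': 'd'}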
A bit more complex, using ``set_fact`` and a ``loop`` to create/update a dictionary with key value pairs from 2 lists:
.. code-block:: YAML+Jinja
:caption: Using set_fact to create a dictionary from a set of lists
:emphasize-lines: 3, 4
- name: Uses 'combine' to update the dictionary and 'zip' to make pairs of both lists
ansible.builtin.set_fact:
mydict: "{{ mydict | default({}) | combine({item[0]: item[1]}) }}"
loop: "{{ (keys | zip(values)) | list }}"
vars:
keys:
- foo
- var
- bar
values:
- a
- b
- c
This results in ``{"foo": "a", "var": "b", "bar": "c"}``.
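Step by step, the loop above behaves like this rough plain-Python sketch (illustration only):
.. code-block:: python
keys = ["foo", "var", "bar"]
values = ["a", "b", "c"]
mydict = {}
for key, value in zip(keys, values):
    # each iteration mirrors: mydict | combine({item[0]: item[1]})
    mydict = {**mydict, key: value}
print(mydict)  # {'foo': 'a', 'var': 'b', 'bar': 'c'}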
You can even combine these simple examples with other filters and lookups to create a dictionary dynamically by matching patterns to variable names:
.. code-block:: YAML+Jinja
:caption: Using 'vars' to define dictionary from a set of lists without needing a task
vars:
myvarnames: "{{ q('varnames', '^my') }}"
mydict: "{{ dict(myvarnames | zip(q('vars', *myvarnames))) }}"
A quick explanation, since there is a lot to unpack from these two lines:
- The ``varnames`` lookup returns a list of variables that match "begin with ``my``".
- The list from the previous step is then fed into the ``vars`` lookup to get the list of values.
The ``*`` is used to 'dereference the list' (a pythonism that works in Jinja), otherwise it would take the list as a single argument.
- Both lists get passed to the ``zip`` filter to pair them off into a unified list of ``(key, value)`` pairs.
- The dict function then takes this 'list of pairs' to create the dictionary.
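The same pipeline expressed as a rough plain-Python sketch (illustration only, with hypothetical variable names):
.. code-block:: python
myvarnames = ["my_key1", "my_key2"]  # what the varnames lookup would return
myvalues = ["value1", "value2"]      # what the vars lookup would return
print(dict(zip(myvarnames, myvalues)))  # {'my_key1': 'value1', 'my_key2': 'value2'}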
An example of how to use facts to find a host's data that meets condition X:
.. code-block:: YAML+Jinja
vars:
uptime_of_host_most_recently_rebooted: "{{ansible_play_hosts_all | map('extract', hostvars, 'ansible_uptime_seconds') | sort | first}}"
An example to show a host uptime in days/hours/minutes/seconds (assumes facts were gathered).
.. code-block:: YAML+Jinja
- name: Show the uptime in days/hours/minutes/seconds
ansible.builtin.debug:
msg: Uptime {{ now().replace(microsecond=0) - now().fromtimestamp(now(fmt='%s') | int - ansible_uptime_seconds) }}
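The timestamp arithmetic in that expression boils down to this rough plain-Python sketch (illustration only; the uptime value is hypothetical):
.. code-block:: python
from datetime import datetime
ansible_uptime_seconds = 90061  # hypothetical fact value
booted = datetime.fromtimestamp(datetime.now().timestamp() - ansible_uptime_seconds)
print(datetime.now().replace(microsecond=0) - booted)  # roughly 1 day, 1:01:01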
.. seealso::
:ref:`playbooks_filters`
Jinja2 filters included with Ansible
:ref:`playbooks_tests`
Jinja2 tests included with Ansible
`Jinja2 Docs <https://jinja.palletsprojects.com/>`_
Jinja2 documentation, includes lists for core filters and tests
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,479 |
ansible-galaxy [core 2.12.2] unable to install collection from git+ssh - No such file or directory: 'git' error
|
### Summary
Attempting to install collections from Bitbucket onto a shared Ansible host for my team. When running the install command I get a "No such file or directory: 'git'" error.
I have to use git+ssh for multiple reasons, the primary one being that ssh access to our Bitbucket is hosted on a non-standard port.
This works on an Ubuntu 20.04 focal WSL environment with the following configuration:
ansible [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/bmorris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/bmorris/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
However, I cannot get this to work on a Debian 11 Bullseye server with the same core version but a newer Python version.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/bmorris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/bmorris/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No changes reported
```
### OS / Environment
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
### Steps to Reproduce
```
ansible-galaxy -vvv collection install git+ssh://[email protected]:7999/my_namespace/my_collections.git -p /my/collections/path/
```
### Expected Results
ansible-galaxy installs the collections from the git repo over ssh to the specified destination.
### Actual Results
```console
$ ansible-galaxy -vvv collection install git+ssh://[email protected]:7999/my_namespace/my_collections.git -p -p /my/collections/path/
[DEPRECATION WARNING]: Setting verbosity before the arg sub command is deprecated, set the verbosity after the sub command. This feature will be removed from ansible-core in version 2.13. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
ansible-galaxy [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/bmorris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/bmorris/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 2.11.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: 'git'
the full traceback was:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 241, in get_direct_collection_meta
return self._artifact_meta_cache[collection.src]
KeyError: 'git+ssh://[email protected]:7999/my_namespace/my_collections.git'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/ansible-galaxy", line 128, in <module>
exit_code = cli.run()
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 569, in run
return context.CLIARGS['func']()
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 86, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 1149, in execute_install
requirements = self._require_one_of_collections_requirements(
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 828, in _require_one_of_collections_requirements
'collections': [
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 829, in <listcomp>
Requirement.from_string(coll_input, artifacts_manager)
File "/usr/lib/python3/dist-packages/ansible/galaxy/dependency_resolution/dataclasses.py", line 195, in from_string
return cls.from_requirement_dict(req, artifacts_manager)
File "/usr/lib/python3/dist-packages/ansible/galaxy/dependency_resolution/dataclasses.py", line 321, in from_requirement_dict
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 230, in get_direct_collection_version
return self.get_direct_collection_meta(collection)['version'] # type: ignore[return-value]
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 243, in get_direct_collection_meta
b_artifact_path = self.get_artifact_path(collection)
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 186, in get_artifact_path
b_artifact_path = _extract_collection_from_git(
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 359, in _extract_collection_from_git
subprocess.check_call(git_clone_cmd)
File "/usr/lib/python3.9/subprocess.py", line 368, in check_call
retcode = call(*popenargs, **kwargs)
File "/usr/lib/python3.9/subprocess.py", line 349, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.9/subprocess.py", line 1823, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'git'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77479
|
https://github.com/ansible/ansible/pull/77493
|
eef0a1cef986ac5e92a5ae00b6b60ba7be5c77d9
|
477c55b0d231dcb62cfcec21fe93a8fee95b4215
| 2022-04-06T17:58:18Z |
python
| 2022-04-08T17:41:53Z |
changelogs/fragments/77493-ansible-galaxy-find-git-executable-before-using.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,479 |
ansible-galaxy [core 2.12.2] unable to install collection from git+ssh - No such file or directory: 'git' error
|
### Summary
Attempting to install collections from Bitbucket onto a shared Ansible host for my team. When running the install command I get a "No such file or directory: 'git'" error.
I have to use git+ssh for multiple reasons, the primary one being that ssh access to our Bitbucket is hosted on a non-standard port.
This works on an Ubuntu 20.04 focal WSL environment with the following configuration:
ansible [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/bmorris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/bmorris/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
However, I cannot get this to work on a Debian 11 Bullseye server with the same core version but a newer Python version.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/bmorris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/bmorris/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No changes reported
```
### OS / Environment
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
### Steps to Reproduce
```
ansible-galaxy -vvv collection install git+ssh://[email protected]:7999/my_namespace/my_collections.git -p /my/collections/path/
```
### Expected Results
ansible-galaxy installs the collections from the git repo over ssh to the specified destination.
### Actual Results
```console
$ ansible-galaxy -vvv collection install git+ssh://[email protected]:7999/my_namespace/my_collections.git -p -p /my/collections/path/
[DEPRECATION WARNING]: Setting verbosity before the arg sub command is deprecated, set the verbosity after the sub command. This feature will be removed from ansible-core in version 2.13. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
ansible-galaxy [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/bmorris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/bmorris/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 2.11.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: 'git'
the full traceback was:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 241, in get_direct_collection_meta
return self._artifact_meta_cache[collection.src]
KeyError: 'git+ssh://[email protected]:7999/my_namespace/my_collections.git'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/ansible-galaxy", line 128, in <module>
exit_code = cli.run()
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 569, in run
return context.CLIARGS['func']()
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 86, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 1149, in execute_install
requirements = self._require_one_of_collections_requirements(
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 828, in _require_one_of_collections_requirements
'collections': [
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 829, in <listcomp>
Requirement.from_string(coll_input, artifacts_manager)
File "/usr/lib/python3/dist-packages/ansible/galaxy/dependency_resolution/dataclasses.py", line 195, in from_string
return cls.from_requirement_dict(req, artifacts_manager)
File "/usr/lib/python3/dist-packages/ansible/galaxy/dependency_resolution/dataclasses.py", line 321, in from_requirement_dict
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 230, in get_direct_collection_version
return self.get_direct_collection_meta(collection)['version'] # type: ignore[return-value]
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 243, in get_direct_collection_meta
b_artifact_path = self.get_artifact_path(collection)
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 186, in get_artifact_path
b_artifact_path = _extract_collection_from_git(
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 359, in _extract_collection_from_git
subprocess.check_call(git_clone_cmd)
File "/usr/lib/python3.9/subprocess.py", line 368, in check_call
retcode = call(*popenargs, **kwargs)
File "/usr/lib/python3.9/subprocess.py", line 349, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.9/subprocess.py", line 1823, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'git'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77479
|
https://github.com/ansible/ansible/pull/77493
|
eef0a1cef986ac5e92a5ae00b6b60ba7be5c77d9
|
477c55b0d231dcb62cfcec21fe93a8fee95b4215
| 2022-04-06T17:58:18Z |
python
| 2022-04-08T17:41:53Z |
lib/ansible/galaxy/collection/concrete_artifact_manager.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Concrete collection candidate management helper module."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import tarfile
import subprocess
import typing as t
from contextlib import contextmanager
from hashlib import sha256
from urllib.error import URLError
from urllib.parse import urldefrag
from shutil import rmtree
from tempfile import mkdtemp
if t.TYPE_CHECKING:
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement,
)
from ansible.galaxy.token import GalaxyToken
from ansible.errors import AnsibleError
from ansible.galaxy import get_collections_galaxy_meta_info
from ansible.galaxy.dependency_resolution.dataclasses import _GALAXY_YAML
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.yaml import yaml_load
from ansible.module_utils.six import raise_from
from ansible.module_utils.urls import open_url
from ansible.utils.display import Display
import yaml
display = Display()
MANIFEST_FILENAME = 'MANIFEST.json'
class ConcreteArtifactsManager:
"""Manager for on-disk collection artifacts.
It is responsible for:
* downloading remote collections from Galaxy-compatible servers and
direct links to tarballs or SCM repositories
* keeping track of local ones
* keeping track of Galaxy API tokens for downloads from Galaxy'ish
as well as the artifact hashes
* keeping track of Galaxy API signatures for downloads from Galaxy'ish
* caching all of above
* retrieving the metadata out of the downloaded artifacts
"""
def __init__(self, b_working_directory, validate_certs=True, keyring=None, timeout=60, required_signature_count=None, ignore_signature_errors=None):
# type: (bytes, bool, str, int, str, list[str]) -> None
"""Initialize ConcreteArtifactsManager caches and costraints."""
self._validate_certs = validate_certs # type: bool
self._artifact_cache = {} # type: dict[bytes, bytes]
self._galaxy_artifact_cache = {} # type: dict[Candidate | Requirement, bytes]
self._artifact_meta_cache = {} # type: dict[bytes, dict[str, str | list[str] | dict[str, str] | None]]
self._galaxy_collection_cache = {} # type: dict[Candidate | Requirement, tuple[str, str, GalaxyToken]]
self._galaxy_collection_origin_cache = {} # type: dict[Candidate, tuple[str, list[dict[str, str]]]]
self._b_working_directory = b_working_directory # type: bytes
self._supplemental_signature_cache = {} # type: dict[str, str]
self._keyring = keyring # type: str
self.timeout = timeout # type: int
self._required_signature_count = required_signature_count # type: str
self._ignore_signature_errors = ignore_signature_errors # type: list[str]
@property
def keyring(self):
return self._keyring
@property
def required_successful_signature_count(self):
return self._required_signature_count
@property
def ignore_signature_errors(self):
if self._ignore_signature_errors is None:
return []
return self._ignore_signature_errors
def get_galaxy_artifact_source_info(self, collection):
# type: (Candidate) -> dict[str, str | list[dict[str, str]]]
server = collection.src.api_server
try:
download_url = self._galaxy_collection_cache[collection][0]
signatures_url, signatures = self._galaxy_collection_origin_cache[collection]
except KeyError as key_err:
raise RuntimeError(
'There is no known source for {coll!s}'.
format(coll=collection),
) from key_err
return {
"format_version": "1.0.0",
"namespace": collection.namespace,
"name": collection.name,
"version": collection.ver,
"server": server,
"version_url": signatures_url,
"download_url": download_url,
"signatures": signatures,
}
def get_galaxy_artifact_path(self, collection):
# type: (Candidate | Requirement) -> bytes
"""Given a Galaxy-stored collection, return a cached path.
If it's not yet on disk, this method downloads the artifact first.
"""
try:
return self._galaxy_artifact_cache[collection]
except KeyError:
pass
try:
url, sha256_hash, token = self._galaxy_collection_cache[collection]
except KeyError as key_err:
raise_from(
RuntimeError(
'There is no known source for {coll!s}'.
format(coll=collection),
),
key_err,
)
display.vvvv(
"Fetching a collection tarball for '{collection!s}' from "
'Ansible Galaxy'.format(collection=collection),
)
try:
b_artifact_path = _download_file(
url,
self._b_working_directory,
expected_hash=sha256_hash,
validate_certs=self._validate_certs,
token=token,
) # type: bytes
except URLError as err:
raise_from(
AnsibleError(
'Failed to download collection tar '
"from '{coll_src!s}': {download_err!s}".
format(
coll_src=to_native(collection.src),
download_err=to_native(err),
),
),
err,
)
else:
display.vvv(
"Collection '{coll!s}' obtained from "
'server {server!s} {url!s}'.format(
coll=collection, server=collection.src or 'Galaxy',
url=collection.src.api_server if collection.src is not None
else '',
)
)
self._galaxy_artifact_cache[collection] = b_artifact_path
return b_artifact_path
def get_artifact_path(self, collection):
# type: (Candidate | Requirement) -> bytes
"""Given a concrete collection pointer, return a cached path.
If it's not yet on disk, this method downloads the artifact first.
"""
try:
return self._artifact_cache[collection.src]
except KeyError:
pass
# NOTE: SCM needs to be special-cased as it may contain either
# NOTE: one collection in its root, or a number of top-level
# NOTE: collection directories instead.
# NOTE: The idea is to store the SCM collection as unpacked
# NOTE: directory structure under the temporary location and use
# NOTE: a "virtual" collection that has pinned requirements on
# NOTE: the directories under that SCM checkout that correspond
# NOTE: to collections.
# NOTE: This brings us to the idea that we need two separate
# NOTE: virtual Requirement/Candidate types --
# NOTE: (single) dir + (multidir) subdirs
if collection.is_url:
display.vvvv(
"Collection requirement '{collection!s}' is a URL "
'to a tar artifact'.format(collection=collection.fqcn),
)
try:
b_artifact_path = _download_file(
collection.src,
self._b_working_directory,
expected_hash=None, # NOTE: URLs don't support checksums
validate_certs=self._validate_certs,
timeout=self.timeout
)
except URLError as err:
raise_from(
AnsibleError(
'Failed to download collection tar '
"from '{coll_src!s}': {download_err!s}".
format(
coll_src=to_native(collection.src),
download_err=to_native(err),
),
),
err,
)
elif collection.is_scm:
b_artifact_path = _extract_collection_from_git(
collection.src,
collection.ver,
self._b_working_directory,
)
elif collection.is_file or collection.is_dir or collection.is_subdirs:
b_artifact_path = to_bytes(collection.src)
else:
# NOTE: This may happen `if collection.is_online_index_pointer`
raise RuntimeError(
'The artifact is of an unexpected type {art_type!s}'.
format(art_type=collection.type)
)
self._artifact_cache[collection.src] = b_artifact_path
return b_artifact_path
def _get_direct_collection_namespace(self, collection):
# type: (Candidate) -> str | None
return self.get_direct_collection_meta(collection)['namespace'] # type: ignore[return-value]
def _get_direct_collection_name(self, collection):
# type: (Candidate) -> str | None
return self.get_direct_collection_meta(collection)['name'] # type: ignore[return-value]
def get_direct_collection_fqcn(self, collection):
# type: (Candidate) -> str | None
"""Extract FQCN from the given on-disk collection artifact.
If the collection is virtual, ``None`` is returned instead
of a string.
"""
if collection.is_virtual:
# NOTE: should it be something like "<virtual>"?
return None
return '.'.join(( # type: ignore[type-var]
self._get_direct_collection_namespace(collection), # type: ignore[arg-type]
self._get_direct_collection_name(collection),
))
def get_direct_collection_version(self, collection):
# type: (Candidate | Requirement) -> str
"""Extract version from the given on-disk collection artifact."""
return self.get_direct_collection_meta(collection)['version'] # type: ignore[return-value]
def get_direct_collection_dependencies(self, collection):
# type: (Candidate | Requirement) -> dict[str, str]
"""Extract deps from the given on-disk collection artifact."""
return self.get_direct_collection_meta(collection)['dependencies'] # type: ignore[return-value]
def get_direct_collection_meta(self, collection):
# type: (Candidate | Requirement) -> dict[str, str | dict[str, str] | list[str] | None]
"""Extract meta from the given on-disk collection artifact."""
try: # FIXME: use unique collection identifier as a cache key?
return self._artifact_meta_cache[collection.src]
except KeyError:
b_artifact_path = self.get_artifact_path(collection)
if collection.is_url or collection.is_file:
collection_meta = _get_meta_from_tar(b_artifact_path)
elif collection.is_dir: # should we just build a coll instead?
# FIXME: what if there's subdirs?
try:
collection_meta = _get_meta_from_dir(b_artifact_path)
except LookupError as lookup_err:
raise_from(
AnsibleError(
'Failed to find the collection dir deps: {err!s}'.
format(err=to_native(lookup_err)),
),
lookup_err,
)
elif collection.is_scm:
collection_meta = {
'name': None,
'namespace': None,
'dependencies': {to_native(b_artifact_path): '*'},
'version': '*',
}
elif collection.is_subdirs:
collection_meta = {
'name': None,
'namespace': None,
# NOTE: Dropping b_artifact_path since it's based on src anyway
'dependencies': dict.fromkeys(
map(to_native, collection.namespace_collection_paths),
'*',
),
'version': '*',
}
else:
raise RuntimeError
self._artifact_meta_cache[collection.src] = collection_meta
return collection_meta
def save_collection_source(self, collection, url, sha256_hash, token, signatures_url, signatures):
# type: (Candidate, str, str, GalaxyToken, str, list[dict[str, str]]) -> None
"""Store collection URL, SHA256 hash and Galaxy API token.
This is a hook that is supposed to be called before attempting to
download Galaxy-based collections with ``get_galaxy_artifact_path()``.
"""
self._galaxy_collection_cache[collection] = url, sha256_hash, token
self._galaxy_collection_origin_cache[collection] = signatures_url, signatures
@classmethod
@contextmanager
def under_tmpdir(
cls,
temp_dir_base, # type: str
validate_certs=True, # type: bool
keyring=None, # type: str
required_signature_count=None, # type: str
ignore_signature_errors=None, # type: list[str]
): # type: (...) -> t.Iterator[ConcreteArtifactsManager]
"""Custom ConcreteArtifactsManager constructor with temp dir.
This method returns a context manager that allocates and cleans
up a temporary directory for caching the collection artifacts
during the dependency resolution process.
"""
# NOTE: Can't use `with tempfile.TemporaryDirectory:`
# NOTE: because it's not in Python 2 stdlib.
temp_path = mkdtemp(
dir=to_bytes(temp_dir_base, errors='surrogate_or_strict'),
)
b_temp_path = to_bytes(temp_path, errors='surrogate_or_strict')
try:
yield cls(
b_temp_path,
validate_certs,
keyring=keyring,
required_signature_count=required_signature_count,
ignore_signature_errors=ignore_signature_errors
)
finally:
rmtree(b_temp_path)
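# Illustration (not part of the original module): typical usage of the
# context-manager constructor above would be
#     with ConcreteArtifactsManager.under_tmpdir('/tmp') as artifacts_mgr:
#         ...  # resolve/download collections; the temp dir is removed on exit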
def parse_scm(collection, version):
"""Extract name, version, path and subdir out of the SCM pointer."""
if ',' in collection:
collection, version = collection.split(',', 1)
elif version == '*' or not version:
version = 'HEAD'
if collection.startswith('git+'):
path = collection[4:]
else:
path = collection
path, fragment = urldefrag(path)
fragment = fragment.strip(os.path.sep)
if path.endswith(os.path.sep + '.git'):
name = path.split(os.path.sep)[-2]
elif '://' not in path and '@' not in path:
name = path
else:
name = path.split('/')[-1]
if name.endswith('.git'):
name = name[:-4]
return name, version, path, fragment
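# Illustration (not part of the original module): given the rules above, on a
# POSIX system
#     parse_scm('git+https://host/org/repo.git#subdir,v1.0', '*')
# returns ('repo', 'v1.0', 'https://host/org/repo.git', 'subdir').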
def _extract_collection_from_git(repo_url, coll_ver, b_path):
name, version, git_url, fragment = parse_scm(repo_url, coll_ver)
b_checkout_path = mkdtemp(
dir=b_path,
prefix=to_bytes(name, errors='surrogate_or_strict'),
) # type: bytes
# Perform a shallow clone if simply cloning HEAD
if version == 'HEAD':
git_clone_cmd = 'git', 'clone', '--depth=1', git_url, to_text(b_checkout_path)
else:
git_clone_cmd = 'git', 'clone', git_url, to_text(b_checkout_path)
# FIXME: '--branch', version
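# NOTE: check_call() resolves the bare 'git' name via PATH; when no git
# executable can be found this raises FileNotFoundError rather than
# CalledProcessError -- the "[Errno 2] No such file or directory: 'git'"
# traceback reported in the issue above.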
try:
subprocess.check_call(git_clone_cmd)
except subprocess.CalledProcessError as proc_err:
raise_from(
AnsibleError( # should probably be LookupError
'Failed to clone a Git repository from `{repo_url!s}`.'.
format(repo_url=to_native(git_url)),
),
proc_err,
)
git_switch_cmd = 'git', 'checkout', to_text(version)
try:
subprocess.check_call(git_switch_cmd, cwd=b_checkout_path)
except subprocess.CalledProcessError as proc_err:
raise_from(
AnsibleError( # should probably be LookupError
'Failed to switch a cloned Git repo `{repo_url!s}` '
'to the requested revision `{commitish!s}`.'.
format(
commitish=to_native(version),
repo_url=to_native(git_url),
),
),
proc_err,
)
return (
os.path.join(b_checkout_path, to_bytes(fragment))
if fragment else b_checkout_path
)
# FIXME: use random subdirs while preserving the file names
def _download_file(url, b_path, expected_hash, validate_certs, token=None, timeout=60):
# type: (str, bytes, str | None, bool, GalaxyToken, int) -> bytes
# ^ NOTE: used in download and verify_collections ^
b_tarball_name = to_bytes(
url.rsplit('/', 1)[1], errors='surrogate_or_strict',
)
b_file_name = b_tarball_name[:-len('.tar.gz')]
b_tarball_dir = mkdtemp(
dir=b_path,
prefix=b'-'.join((b_file_name, b'')),
) # type: bytes
b_file_path = os.path.join(b_tarball_dir, b_tarball_name)
display.display("Downloading %s to %s" % (url, to_text(b_tarball_dir)))
# NOTE: Galaxy redirects downloads to S3 which rejects the request
# NOTE: if an Authorization header is attached so don't redirect it
resp = open_url(
to_native(url, errors='surrogate_or_strict'),
validate_certs=validate_certs,
headers=None if token is None else token.headers(),
unredirected_headers=['Authorization'], http_agent=user_agent(),
timeout=timeout
)
with open(b_file_path, 'wb') as download_file: # type: t.BinaryIO
actual_hash = _consume_file(resp, write_to=download_file)
if expected_hash:
display.vvvv(
'Validating downloaded file hash {actual_hash!s} with '
'expected hash {expected_hash!s}'.
format(actual_hash=actual_hash, expected_hash=expected_hash)
)
if expected_hash != actual_hash:
raise AnsibleError('Mismatch artifact hash with downloaded file')
return b_file_path
def _consume_file(read_from, write_to=None):
# type: (t.BinaryIO, t.BinaryIO) -> str
bufsize = 65536
sha256_digest = sha256()
data = read_from.read(bufsize)
while data:
if write_to is not None:
write_to.write(data)
write_to.flush()
sha256_digest.update(data)
data = read_from.read(bufsize)
return sha256_digest.hexdigest()
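# Illustration (not part of the original module): _consume_file both streams
# the source into `write_to` (when given) and returns the SHA-256 hex digest,
# for example
#     with open(b'artifact.tar.gz', 'rb') as src:
#         digest = _consume_file(src)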
def _normalize_galaxy_yml_manifest(
galaxy_yml, # type: dict[str, str | list[str] | dict[str, str] | None]
b_galaxy_yml_path, # type: bytes
):
# type: (...) -> dict[str, str | list[str] | dict[str, str] | None]
galaxy_yml_schema = (
get_collections_galaxy_meta_info()
) # type: list[dict[str, t.Any]] # FIXME: <--
# FIXME: 👆maybe precise type: list[dict[str, bool | str | list[str]]]
mandatory_keys = set()
string_keys = set() # type: set[str]
list_keys = set() # type: set[str]
dict_keys = set() # type: set[str]
for info in galaxy_yml_schema:
if info.get('required', False):
mandatory_keys.add(info['key'])
key_list_type = {
'str': string_keys,
'list': list_keys,
'dict': dict_keys,
}[info.get('type', 'str')]
key_list_type.add(info['key'])
all_keys = frozenset(list(mandatory_keys) + list(string_keys) + list(list_keys) + list(dict_keys))
set_keys = set(galaxy_yml.keys())
missing_keys = mandatory_keys.difference(set_keys)
if missing_keys:
raise AnsibleError("The collection galaxy.yml at '%s' is missing the following mandatory keys: %s"
% (to_native(b_galaxy_yml_path), ", ".join(sorted(missing_keys))))
extra_keys = set_keys.difference(all_keys)
if len(extra_keys) > 0:
display.warning("Found unknown keys in collection galaxy.yml at '%s': %s"
% (to_text(b_galaxy_yml_path), ", ".join(extra_keys)))
# Add the defaults if they have not been set
for optional_string in string_keys:
if optional_string not in galaxy_yml:
galaxy_yml[optional_string] = None
for optional_list in list_keys:
list_val = galaxy_yml.get(optional_list, None)
if list_val is None:
galaxy_yml[optional_list] = []
elif not isinstance(list_val, list):
galaxy_yml[optional_list] = [list_val] # type: ignore[list-item]
for optional_dict in dict_keys:
if optional_dict not in galaxy_yml:
galaxy_yml[optional_dict] = {}
# NOTE: `version: null` is only allowed for `galaxy.yml`
# NOTE: and not `MANIFEST.json`. The use-case for it is collections
# NOTE: that generate the version from Git before building a
# NOTE: distributable tarball artifact.
if not galaxy_yml.get('version'):
galaxy_yml['version'] = '*'
return galaxy_yml
def _get_meta_from_dir(
b_path, # type: bytes
): # type: (...) -> dict[str, str | list[str] | dict[str, str] | None]
try:
return _get_meta_from_installed_dir(b_path)
except LookupError:
return _get_meta_from_src_dir(b_path)
def _get_meta_from_src_dir(
b_path, # type: bytes
): # type: (...) -> dict[str, str | list[str] | dict[str, str] | None]
galaxy_yml = os.path.join(b_path, _GALAXY_YAML)
if not os.path.isfile(galaxy_yml):
raise LookupError(
"The collection galaxy.yml path '{path!s}' does not exist.".
format(path=to_native(galaxy_yml))
)
with open(galaxy_yml, 'rb') as manifest_file_obj:
try:
manifest = yaml_load(manifest_file_obj)
except yaml.error.YAMLError as yaml_err:
raise_from(
AnsibleError(
"Failed to parse the galaxy.yml at '{path!s}' with "
'the following error:\n{err_txt!s}'.
format(
path=to_native(galaxy_yml),
err_txt=to_native(yaml_err),
),
),
yaml_err,
)
return _normalize_galaxy_yml_manifest(manifest, galaxy_yml)
def _get_json_from_installed_dir(
b_path, # type: bytes
filename, # type: str
): # type: (...) -> dict
b_json_filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
try:
with open(b_json_filepath, 'rb') as manifest_fd:
b_json_text = manifest_fd.read()
except (IOError, OSError):
raise LookupError(
"The collection {manifest!s} path '{path!s}' does not exist.".
format(
manifest=filename,
path=to_native(b_json_filepath),
)
)
manifest_txt = to_text(b_json_text, errors='surrogate_or_strict')
try:
manifest = json.loads(manifest_txt)
except ValueError:
raise AnsibleError(
'Collection tar file member {member!s} does not '
'contain a valid json string.'.
format(member=filename),
)
return manifest
def _get_meta_from_installed_dir(
b_path, # type: bytes
): # type: (...) -> dict[str, str | list[str] | dict[str, str] | None]
manifest = _get_json_from_installed_dir(b_path, MANIFEST_FILENAME)
collection_info = manifest['collection_info']
version = collection_info.get('version')
if not version:
raise AnsibleError(
u'Collection metadata file `{manifest_filename!s}` at `{meta_file!s}` is expected '
u'to have a valid SemVer version value but got {version!s}'.
format(
manifest_filename=MANIFEST_FILENAME,
meta_file=to_text(b_path),
version=to_text(repr(version)),
),
)
return collection_info
def _get_meta_from_tar(
b_path, # type: bytes
): # type: (...) -> dict[str, str | list[str] | dict[str, str] | None]
if not tarfile.is_tarfile(b_path):
raise AnsibleError(
"Collection artifact at '{path!s}' is not a valid tar file.".
format(path=to_native(b_path)),
)
with tarfile.open(b_path, mode='r') as collection_tar: # type: tarfile.TarFile
try:
member = collection_tar.getmember(MANIFEST_FILENAME)
except KeyError:
raise AnsibleError(
"Collection at '{path!s}' does not contain the "
'required file {manifest_file!s}.'.
format(
path=to_native(b_path),
manifest_file=MANIFEST_FILENAME,
),
)
with _tarfile_extract(collection_tar, member) as (_member, member_obj):
if member_obj is None:
raise AnsibleError(
'Collection tar file does not contain '
'member {member!s}'.format(member=MANIFEST_FILENAME),
)
text_content = to_text(
member_obj.read(),
errors='surrogate_or_strict',
)
try:
manifest = json.loads(text_content)
except ValueError:
raise AnsibleError(
'Collection tar file member {member!s} does not '
'contain a valid json string.'.
format(member=MANIFEST_FILENAME),
)
return manifest['collection_info']
@contextmanager
def _tarfile_extract(
tar, # type: tarfile.TarFile
member, # type: tarfile.TarInfo
):
# type: (...) -> t.Iterator[tuple[tarfile.TarInfo, t.IO[bytes] | None]]
tar_obj = tar.extractfile(member)
try:
yield member, tar_obj
finally:
if tar_obj is not None:
tar_obj.close()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,479 |
ansible-galaxy [core 2.12.2] unable to install collection from git+ssh - No such file or directory: 'git' error
|
### Summary
Attempting to install collections from Bitbucket onto a shared Ansible host for my team. When running the install command I get a "No such file or directory: 'git'" error.
I have to use git+ssh for multiple reasons, the primary one being that ssh access to our Bitbucket is hosted on a non-standard port.
This works on an Ubuntu 20.04 focal WSL environment with the following configuration:
ansible [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/bmorris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/bmorris/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
However, I cannot get this to work on a Debian 11 Bullseye server with the same core version but a newer Python version.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/bmorris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/bmorris/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No changes reported
```
### OS / Environment
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
### Steps to Reproduce
```
ansible-galaxy -vvv collection install git+ssh://[email protected]:7999/my_namespace/my_collections.git -p /my/collections/path/
```
### Expected Results
ansible-galaxy installs the collections from the git repo over ssh to the specified destination.
### Actual Results
```console
$ ansible-galaxy -vvv collection install git+ssh://[email protected]:7999/my_namespace/my_collections.git -p -p /my/collections/path/
[DEPRECATION WARNING]: Setting verbosity before the arg sub command is deprecated, set the verbosity after the sub command. This feature will be removed from ansible-core in version 2.13. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
ansible-galaxy [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/bmorris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/bmorris/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 2.11.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: 'git'
the full traceback was:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 241, in get_direct_collection_meta
return self._artifact_meta_cache[collection.src]
KeyError: 'git+ssh://[email protected]:7999/my_namespace/my_collections.git'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/ansible-galaxy", line 128, in <module>
exit_code = cli.run()
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 569, in run
return context.CLIARGS['func']()
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 86, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 1149, in execute_install
requirements = self._require_one_of_collections_requirements(
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 828, in _require_one_of_collections_requirements
'collections': [
File "/usr/lib/python3/dist-packages/ansible/cli/galaxy.py", line 829, in <listcomp>
Requirement.from_string(coll_input, artifacts_manager)
File "/usr/lib/python3/dist-packages/ansible/galaxy/dependency_resolution/dataclasses.py", line 195, in from_string
return cls.from_requirement_dict(req, artifacts_manager)
File "/usr/lib/python3/dist-packages/ansible/galaxy/dependency_resolution/dataclasses.py", line 321, in from_requirement_dict
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 230, in get_direct_collection_version
return self.get_direct_collection_meta(collection)['version'] # type: ignore[return-value]
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 243, in get_direct_collection_meta
b_artifact_path = self.get_artifact_path(collection)
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 186, in get_artifact_path
b_artifact_path = _extract_collection_from_git(
File "/usr/lib/python3/dist-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 359, in _extract_collection_from_git
subprocess.check_call(git_clone_cmd)
File "/usr/lib/python3.9/subprocess.py", line 368, in check_call
retcode = call(*popenargs, **kwargs)
File "/usr/lib/python3.9/subprocess.py", line 349, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.9/subprocess.py", line 1823, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'git'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77479
|
https://github.com/ansible/ansible/pull/77493
|
eef0a1cef986ac5e92a5ae00b6b60ba7be5c77d9
|
477c55b0d231dcb62cfcec21fe93a8fee95b4215
| 2022-04-06T17:58:18Z |
python
| 2022-04-08T17:41:53Z |
test/units/galaxy/test_collection_install.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import json
import os
import pytest
import re
import shutil
import stat
import tarfile
import yaml
from io import BytesIO, StringIO
from mock import MagicMock, patch
import ansible.module_utils.six.moves.urllib.error as urllib_error
from ansible import context
from ansible.cli.galaxy import GalaxyCLI
from ansible.errors import AnsibleError
from ansible.galaxy import collection, api, dependency_resolution
from ansible.galaxy.dependency_resolution.dataclasses import Candidate, Requirement
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.utils import context_objects as co
from ansible.utils.display import Display
class RequirementCandidates():
def __init__(self):
self.candidates = []
def func_wrapper(self, func):
def run(*args, **kwargs):
self.candidates = func(*args, **kwargs)
return self.candidates
return run
def call_galaxy_cli(args):
orig = co.GlobalCLIArgs._Singleton__instance
co.GlobalCLIArgs._Singleton__instance = None
try:
GalaxyCLI(args=['ansible-galaxy', 'collection'] + args).run()
finally:
co.GlobalCLIArgs._Singleton__instance = orig
def artifact_json(namespace, name, version, dependencies, server):
json_str = json.dumps({
'artifact': {
'filename': '%s-%s-%s.tar.gz' % (namespace, name, version),
'sha256': '2d76f3b8c4bab1072848107fb3914c345f71a12a1722f25c08f5d3f51f4ab5fd',
'size': 1234,
},
'download_url': '%s/download/%s-%s-%s.tar.gz' % (server, namespace, name, version),
'metadata': {
'namespace': namespace,
'name': name,
'dependencies': dependencies,
},
'version': version
})
return to_text(json_str)
def artifact_versions_json(namespace, name, versions, galaxy_api, available_api_versions=None):
results = []
available_api_versions = available_api_versions or {}
api_version = 'v2'
if 'v3' in available_api_versions:
api_version = 'v3'
for version in versions:
results.append({
'href': '%s/api/%s/%s/%s/versions/%s/' % (galaxy_api.api_server, api_version, namespace, name, version),
'version': version,
})
if api_version == 'v2':
json_str = json.dumps({
'count': len(versions),
'next': None,
'previous': None,
'results': results
})
if api_version == 'v3':
response = {'meta': {'count': len(versions)},
'data': results,
'links': {'first': None,
'last': None,
'next': None,
'previous': None},
}
json_str = json.dumps(response)
return to_text(json_str)
def error_json(galaxy_api, errors_to_return=None, available_api_versions=None):
errors_to_return = errors_to_return or []
available_api_versions = available_api_versions or {}
response = {}
api_version = 'v2'
if 'v3' in available_api_versions:
api_version = 'v3'
if api_version == 'v2':
assert len(errors_to_return) <= 1
if errors_to_return:
response = errors_to_return[0]
if api_version == 'v3':
response['errors'] = errors_to_return
json_str = json.dumps(response)
return to_text(json_str)
@pytest.fixture(autouse='function')
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
yield
co.GlobalCLIArgs._Singleton__instance = None
@pytest.fixture()
def collection_artifact(request, tmp_path_factory):
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
namespace = 'ansible_namespace'
collection = 'collection'
skeleton_path = os.path.join(os.path.dirname(os.path.split(__file__)[0]), 'cli', 'test_data', 'collection_skeleton')
collection_path = os.path.join(test_dir, namespace, collection)
call_galaxy_cli(['init', '%s.%s' % (namespace, collection), '-c', '--init-path', test_dir,
'--collection-skeleton', skeleton_path])
dependencies = getattr(request, 'param', None)
if dependencies:
galaxy_yml = os.path.join(collection_path, 'galaxy.yml')
with open(galaxy_yml, 'rb+') as galaxy_obj:
existing_yaml = yaml.safe_load(galaxy_obj)
existing_yaml['dependencies'] = dependencies
galaxy_obj.seek(0)
galaxy_obj.write(to_bytes(yaml.safe_dump(existing_yaml)))
galaxy_obj.truncate()
# Create a file with +x in the collection so we can test the permissions
execute_path = os.path.join(collection_path, 'runme.sh')
with open(execute_path, mode='wb') as fd:
fd.write(b"echo hi")
os.chmod(execute_path, os.stat(execute_path).st_mode | stat.S_IEXEC)
call_galaxy_cli(['build', collection_path, '--output-path', test_dir])
collection_tar = os.path.join(test_dir, '%s-%s-0.1.0.tar.gz' % (namespace, collection))
return to_bytes(collection_path), to_bytes(collection_tar)
@pytest.fixture()
def galaxy_server():
context.CLIARGS._store = {'ignore_certs': False}
galaxy_api = api.GalaxyAPI(None, 'test_server', 'https://galaxy.ansible.com')
galaxy_api.get_collection_signatures = MagicMock(return_value=[])
return galaxy_api
@pytest.mark.parametrize(
'url,version,trailing_slash',
[
('https://github.com/org/repo', 'commitish', False),
('https://github.com/org/repo,commitish', None, False),
('https://github.com/org/repo/,commitish', None, True),
('https://github.com/org/repo#,commitish', None, False),
]
)
def test_concrete_artifact_manager_scm_cmd(url, version, trailing_slash, monkeypatch):
mock_subprocess_check_call = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager.subprocess, 'check_call', mock_subprocess_check_call)
mock_mkdtemp = MagicMock(return_value='')
monkeypatch.setattr(collection.concrete_artifact_manager, 'mkdtemp', mock_mkdtemp)
collection.concrete_artifact_manager._extract_collection_from_git(url, version, b'path')
assert mock_subprocess_check_call.call_count == 2
repo = 'https://github.com/org/repo'
if trailing_slash:
repo += '/'
clone_cmd = ('git', 'clone', repo, '')
assert mock_subprocess_check_call.call_args_list[0].args[0] == clone_cmd
assert mock_subprocess_check_call.call_args_list[1].args[0] == ('git', 'checkout', 'commitish')
@pytest.mark.parametrize(
'url,version,trailing_slash',
[
('https://github.com/org/repo', 'HEAD', False),
('https://github.com/org/repo,HEAD', None, False),
('https://github.com/org/repo/,HEAD', None, True),
('https://github.com/org/repo#,HEAD', None, False),
('https://github.com/org/repo', None, False),
]
)
def test_concrete_artifact_manager_scm_cmd_shallow(url, version, trailing_slash, monkeypatch):
mock_subprocess_check_call = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager.subprocess, 'check_call', mock_subprocess_check_call)
mock_mkdtemp = MagicMock(return_value='')
monkeypatch.setattr(collection.concrete_artifact_manager, 'mkdtemp', mock_mkdtemp)
collection.concrete_artifact_manager._extract_collection_from_git(url, version, b'path')
assert mock_subprocess_check_call.call_count == 2
repo = 'https://github.com/org/repo'
if trailing_slash:
repo += '/'
shallow_clone_cmd = ('git', 'clone', '--depth=1', repo, '')
assert mock_subprocess_check_call.call_args_list[0].args[0] == shallow_clone_cmd
assert mock_subprocess_check_call.call_args_list[1].args[0] == ('git', 'checkout', 'HEAD')
def test_build_requirement_from_path(collection_artifact):
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
assert actual.namespace == u'ansible_namespace'
assert actual.name == u'collection'
assert actual.src == collection_artifact[0]
assert actual.ver == u'0.1.0'
@pytest.mark.parametrize('version', ['1.1.1', '1.1.0', '1.0.0'])
def test_build_requirement_from_path_with_manifest(version, collection_artifact):
manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json')
manifest_value = json.dumps({
'collection_info': {
'namespace': 'namespace',
'name': 'name',
'version': version,
'dependencies': {
'ansible_namespace.collection': '*'
}
}
})
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(to_bytes(manifest_value))
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
# While the folder name suggests a different collection, we treat MANIFEST.json as the source of truth.
assert actual.namespace == u'namespace'
assert actual.name == u'name'
assert actual.src == collection_artifact[0]
assert actual.ver == to_text(version)
def test_build_requirement_from_path_invalid_manifest(collection_artifact):
manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json')
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(b"not json")
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
expected = "Collection tar file member MANIFEST.json does not contain a valid json string."
with pytest.raises(AnsibleError, match=expected):
Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
def test_build_artifact_from_path_no_version(collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
# a collection artifact should always contain a valid version
manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json')
manifest_value = json.dumps({
'collection_info': {
'namespace': 'namespace',
'name': 'name',
'version': '',
'dependencies': {}
}
})
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(to_bytes(manifest_value))
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
expected = (
'^Collection metadata file `.*` at `.*` is expected to have a valid SemVer '
'version value but got {empty_unicode_string!r}$'.
format(empty_unicode_string=u'')
)
with pytest.raises(AnsibleError, match=expected):
Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
def test_build_requirement_from_path_no_version(collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
# version may be falsey/arbitrary strings for collections in development
manifest_path = os.path.join(collection_artifact[0], b'galaxy.yml')
metadata = {
'authors': ['Ansible'],
'readme': 'README.md',
'namespace': 'namespace',
'name': 'name',
'version': '',
'dependencies': {},
}
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(to_bytes(yaml.safe_dump(metadata)))
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
# While the folder name suggests a different collection, we treat galaxy.yml as the source of truth.
assert actual.namespace == u'namespace'
assert actual.name == u'name'
assert actual.src == collection_artifact[0]
assert actual.ver == u'*'
def test_build_requirement_from_tar(collection_artifact):
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_requirement_dict({'name': to_text(collection_artifact[1])}, concrete_artifact_cm)
assert actual.namespace == u'ansible_namespace'
assert actual.name == u'collection'
assert actual.src == to_text(collection_artifact[1])
assert actual.ver == u'0.1.0'
def test_build_requirement_from_tar_fail_not_tar(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
test_file = os.path.join(test_dir, b'fake.tar.gz')
with open(test_file, 'wb') as test_obj:
test_obj.write(b"\x00\x01\x02\x03")
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
expected = "Collection artifact at '%s' is not a valid tar file." % to_native(test_file)
with pytest.raises(AnsibleError, match=expected):
Requirement.from_requirement_dict({'name': to_text(test_file)}, concrete_artifact_cm)
def test_build_requirement_from_tar_no_manifest(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
json_data = to_bytes(json.dumps(
{
'files': [],
'format': 1,
}
))
tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(json_data)
tar_info = tarfile.TarInfo('FILES.json')
tar_info.size = len(json_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
expected = "Collection at '%s' does not contain the required file MANIFEST.json." % to_native(tar_path)
with pytest.raises(AnsibleError, match=expected):
Requirement.from_requirement_dict({'name': to_text(tar_path)}, concrete_artifact_cm)
def test_build_requirement_from_tar_no_files(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
json_data = to_bytes(json.dumps(
{
'collection_info': {},
}
))
tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(json_data)
tar_info = tarfile.TarInfo('MANIFEST.json')
tar_info.size = len(json_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
with pytest.raises(KeyError, match='namespace'):
Requirement.from_requirement_dict({'name': to_text(tar_path)}, concrete_artifact_cm)
def test_build_requirement_from_tar_invalid_manifest(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
json_data = b"not a json"
tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(json_data)
tar_info = tarfile.TarInfo('MANIFEST.json')
tar_info.size = len(json_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
expected = "Collection tar file member MANIFEST.json does not contain a valid json string."
with pytest.raises(AnsibleError, match=expected):
Requirement.from_requirement_dict({'name': to_text(tar_path)}, concrete_artifact_cm)
def test_build_requirement_from_name(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.1.9', '2.1.10']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_version_metadata = MagicMock(
namespace='namespace', name='collection',
version='2.1.10', artifact_sha256='', dependencies={}
)
monkeypatch.setattr(api.GalaxyAPI, 'get_collection_version_metadata', mock_version_metadata)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
collections = ['namespace.collection']
requirements_file = None
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', collections[0]])
requirements = cli._require_one_of_collections_requirements(
collections, requirements_file, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.ver == u'2.1.10'
assert actual.src == galaxy_server
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_build_requirement_from_name_with_prerelease(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '2.0.1-beta.1', '2.0.1']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.1'
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_build_requirement_from_name_with_prerelease_explicit(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '2.0.1-beta.1', '2.0.1']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1-beta.1', None, None,
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:2.0.1-beta.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:2.0.1-beta.1'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.1-beta.1'
assert mock_get_info.call_count == 1
assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.1-beta.1')
def test_build_requirement_from_name_second_server(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '1.0.2', '1.0.3']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '1.0.3', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
broken_server = copy.copy(galaxy_server)
broken_server.api_server = 'https://broken.com/'
mock_version_list = MagicMock()
mock_version_list.return_value = []
monkeypatch.setattr(broken_server, 'get_collection_versions', mock_version_list)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>1.0.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:>1.0.1'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [broken_server, galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'1.0.3'
assert mock_version_list.call_count == 1
assert mock_version_list.mock_calls[0][1] == ('namespace', 'collection')
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_build_requirement_from_name_missing(galaxy_server, monkeypatch, tmp_path_factory):
mock_open = MagicMock()
mock_open.return_value = []
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_open)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>1.0.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n* namespace.collection:* (direct request)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server, galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_build_requirement_from_name_401_unauthorized(galaxy_server, monkeypatch, tmp_path_factory):
mock_open = MagicMock()
mock_open.side_effect = api.GalaxyError(urllib_error.HTTPError('https://galaxy.server.com', 401, 'msg', {},
StringIO()), "error")
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_open)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>1.0.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "error (HTTP Code: 401, Message: msg)"
with pytest.raises(api.GalaxyError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server, galaxy_server], concrete_artifact_cm, None, False, False, False, False)
def test_build_requirement_from_name_single_version(galaxy_server, monkeypatch, tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
multi_api_proxy = collection.galaxy_api_proxy.MultiGalaxyAPIProxy([galaxy_server], concrete_artifact_cm)
dep_provider = dependency_resolution.providers.CollectionDependencyProvider(apis=multi_api_proxy, concrete_artifacts_manager=concrete_artifact_cm)
matches = RequirementCandidates()
mock_find_matches = MagicMock(side_effect=matches.func_wrapper(dep_provider.find_matches), autospec=True)
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.0']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.0', None, None,
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:==2.0.0'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:==2.0.0'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.0'
assert [c.ver for c in matches.candidates] == [u'2.0.0']
assert mock_get_info.call_count == 1
assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.0')
def test_build_requirement_from_name_multiple_versions_one_match(galaxy_server, monkeypatch, tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
multi_api_proxy = collection.galaxy_api_proxy.MultiGalaxyAPIProxy([galaxy_server], concrete_artifact_cm)
dep_provider = dependency_resolution.providers.CollectionDependencyProvider(apis=multi_api_proxy, concrete_artifacts_manager=concrete_artifact_cm)
matches = RequirementCandidates()
mock_find_matches = MagicMock(side_effect=matches.func_wrapper(dep_provider.find_matches), autospec=True)
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.0', '2.0.1', '2.0.2']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1', None, None,
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>=2.0.1,<2.0.2'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:>=2.0.1,<2.0.2'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.1'
assert [c.ver for c in matches.candidates] == [u'2.0.1']
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
assert mock_get_info.call_count == 1
assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.1')
def test_build_requirement_from_name_multiple_version_results(galaxy_server, monkeypatch, tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
multi_api_proxy = collection.galaxy_api_proxy.MultiGalaxyAPIProxy([galaxy_server], concrete_artifact_cm)
dep_provider = dependency_resolution.providers.CollectionDependencyProvider(apis=multi_api_proxy, concrete_artifacts_manager=concrete_artifact_cm)
matches = RequirementCandidates()
mock_find_matches = MagicMock(side_effect=matches.func_wrapper(dep_provider.find_matches), autospec=True)
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.5', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.0', '2.0.1', '2.0.2', '2.0.3', '2.0.4', '2.0.5']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:!=2.0.2'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:!=2.0.2'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.5'
# should be ordered latest to earliest
assert [c.ver for c in matches.candidates] == [u'2.0.5', u'2.0.4', u'2.0.3', u'2.0.1', u'2.0.0']
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_server):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.5', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.5']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:!=2.0.5'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:!=2.0.5'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n"
expected += "* namespace.collection:!=2.0.5 (direct request)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_dep_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_server):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_get_info_return = [
api.CollectionVersionMetadata('parent', 'collection', '2.0.5', None, None, {'namespace.collection': '!=1.0.0'}, None, None),
api.CollectionVersionMetadata('namespace', 'collection', '1.0.0', None, None, {}, None, None),
]
mock_get_info = MagicMock(side_effect=mock_get_info_return)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock(side_effect=[['2.0.5'], ['1.0.0']])
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'parent.collection:2.0.5'])
requirements = cli._require_one_of_collections_requirements(
['parent.collection:2.0.5'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n"
expected += "* namespace.collection:!=1.0.0 (dependency of parent.collection:2.0.5)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_install_installed_collection(monkeypatch, tmp_path_factory, galaxy_server):
mock_installed_collections = MagicMock(return_value=[Candidate('namespace.collection', '1.2.3', None, 'dir', None)])
monkeypatch.setattr(collection, 'find_existing_collections', mock_installed_collections)
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '1.2.3', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock(return_value=['1.2.3', '1.3.0'])
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection'])
cli.run()
expected = "Nothing to do. All requested collections are already installed. If you want to reinstall them, consider using `--force`."
assert mock_display.mock_calls[1][1][0] == expected
def test_install_collection(collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
collection_tar = collection_artifact[1]
temp_path = os.path.join(os.path.split(collection_tar)[0], b'temp')
os.makedirs(temp_path)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
output_path = os.path.join(os.path.split(collection_tar)[0])
collection_path = os.path.join(output_path, b'ansible_namespace', b'collection')
os.makedirs(os.path.join(collection_path, b'delete_me')) # Create a folder to verify the install cleans out the dir
candidate = Candidate('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)
collection.install(candidate, to_text(output_path), concrete_artifact_cm)
# Ensure the temp directory is empty, nothing is left behind
assert os.listdir(temp_path) == []
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
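# permission checks: directories and the executable script keep 0755, plain files land as 0644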
assert stat.S_IMODE(os.stat(os.path.join(collection_path, b'plugins')).st_mode) == 0o0755
assert stat.S_IMODE(os.stat(os.path.join(collection_path, b'README.md')).st_mode) == 0o0644
assert stat.S_IMODE(os.stat(os.path.join(collection_path, b'runme.sh')).st_mode) == 0o0755
assert mock_display.call_count == 2
assert mock_display.mock_calls[0][1][0] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" \
% to_text(collection_path)
assert mock_display.mock_calls[1][1][0] == "ansible_namespace.collection:0.1.0 was installed successfully"
def test_install_collection_with_download(galaxy_server, collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
shutil.rmtree(collection_path)
collections_dir = ('%s' % os.path.sep).join(to_text(collection_path).split('%s' % os.path.sep)[:-2])
temp_path = os.path.join(os.path.split(collection_tar)[0], b'temp')
os.makedirs(temp_path)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
mock_download = MagicMock()
mock_download.return_value = collection_tar
monkeypatch.setattr(concrete_artifact_cm, 'get_galaxy_artifact_path', mock_download)
req = Candidate('ansible_namespace.collection', '0.1.0', 'https://downloadme.com', 'galaxy', None)
collection.install(req, to_text(collections_dir), concrete_artifact_cm)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
assert mock_display.call_count == 2
assert mock_display.mock_calls[0][1][0] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" \
% to_text(collection_path)
assert mock_display.mock_calls[1][1][0] == "ansible_namespace.collection:0.1.0 was installed successfully"
assert mock_download.call_count == 1
assert mock_download.mock_calls[0][1][0].src == 'https://downloadme.com'
assert mock_download.mock_calls[0][1][0].type == 'galaxy'
def test_install_collections_from_tar(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
shutil.rmtree(collection_path)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
with open(os.path.join(collection_path, b'MANIFEST.json'), 'rb') as manifest_obj:
actual_manifest = json.loads(to_text(manifest_obj.read()))
assert actual_manifest['collection_info']['namespace'] == 'ansible_namespace'
assert actual_manifest['collection_info']['name'] == 'collection'
assert actual_manifest['collection_info']['version'] == '0.1.0'
# Filter out the progress cursor display calls.
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert len(display_msgs) == 4
assert display_msgs[0] == "Process install dependency map"
assert display_msgs[1] == "Starting collection install process"
assert display_msgs[2] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" % to_text(collection_path)
def test_install_collections_existing_without_force(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
assert os.path.isdir(collection_path)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'README.md', b'docs', b'galaxy.yml', b'playbooks', b'plugins', b'roles', b'runme.sh']
# Filter out the progress cursor display calls.
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert len(display_msgs) == 1
assert display_msgs[0] == 'Nothing to do. All requested collections are already installed. If you want to reinstall them, consider using `--force`.'
for msg in display_msgs:
assert 'WARNING' not in msg
def test_install_missing_metadata_warning(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
for file in [b'MANIFEST.json', b'galaxy.yml']:
b_path = os.path.join(collection_path, file)
if os.path.isfile(b_path):
os.unlink(b_path)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert 'WARNING' in display_msgs[0]
# Makes sure we don't get stuck in some recursive loop
@pytest.mark.parametrize('collection_artifact', [
{'ansible_namespace.collection': '>=0.0.1'},
], indirect=True)
def test_install_collection_with_circular_dependency(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
shutil.rmtree(collection_path)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
with open(os.path.join(collection_path, b'MANIFEST.json'), 'rb') as manifest_obj:
actual_manifest = json.loads(to_text(manifest_obj.read()))
assert actual_manifest['collection_info']['namespace'] == 'ansible_namespace'
assert actual_manifest['collection_info']['name'] == 'collection'
assert actual_manifest['collection_info']['version'] == '0.1.0'
# Filter out the progress cursor display calls.
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert len(display_msgs) == 4
assert display_msgs[0] == "Process install dependency map"
assert display_msgs[1] == "Starting collection install process"
assert display_msgs[2] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" % to_text(collection_path)
assert display_msgs[3] == "ansible_namespace.collection:0.1.0 was installed successfully"
@pytest.mark.parametrize(
"signatures,required_successful_count,ignore_errors,expected_success",
[
([], 'all', [], True),
(["good_signature"], 'all', [], True),
(["good_signature", collection.gpg.GpgBadArmor(status='failed')], 'all', [], False),
([collection.gpg.GpgBadArmor(status='failed')], 'all', [], False),
# This is expected to succeed because ignored does not increment failed signatures.
# "all" signatures is not a specific number, so all == no (non-ignored) signatures in this case.
([collection.gpg.GpgBadArmor(status='failed')], 'all', ["BADARMOR"], True),
([collection.gpg.GpgBadArmor(status='failed'), "good_signature"], 'all', ["BADARMOR"], True),
([], '+all', [], False),
([collection.gpg.GpgBadArmor(status='failed')], '+all', ["BADARMOR"], False),
([], '1', [], True),
([], '+1', [], False),
(["good_signature"], '2', [], False),
(["good_signature", collection.gpg.GpgBadArmor(status='failed')], '2', [], False),
# This is expected to fail because ignored does not increment successful signatures.
# 2 signatures are required, but only 1 is successful.
(["good_signature", collection.gpg.GpgBadArmor(status='failed')], '2', ["BADARMOR"], False),
(["good_signature", "good_signature"], '2', [], True),
]
)
def test_verify_file_signatures(signatures, required_successful_count, ignore_errors, expected_success):
# type: (List[object], str, List[str], bool) -> None
def gpg_error_generator(results):
for result in results:
if isinstance(result, collection.gpg.GpgBaseError):
yield result
fqcn = 'ns.coll'
manifest_file = 'MANIFEST.json'
keyring = '~/.ansible/pubring.kbx'
with patch.object(collection, 'run_gpg_verify', MagicMock(return_value=("somestdout", 0,))):
with patch.object(collection, 'parse_gpg_errors', MagicMock(return_value=gpg_error_generator(signatures))):
assert collection.verify_file_signatures(
fqcn,
manifest_file,
signatures,
keyring,
required_successful_count,
ignore_errors
) == expected_success
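# Rough model (an assumption, not the implementation under test) of the
# acceptance rule the parametrize table above encodes: a '+' prefix demands at
# least one successful signature, plain 'all' tolerates an empty signature
# list, and ignored errors count as neither success nor failure.
def _meets_required_count(successful, failed, total, required):  # hypothetical
    strict = required.startswith('+')
    base = required.lstrip('+')
    if strict and successful == 0:
        return False
    if total == 0:
        return True  # nothing to verify; only the '+' forms reject this case
    if base == 'all':
        return failed == 0
    return successful >= int(base)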
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59604 |
[1] Update collections dev guide for testing
|
##### SUMMARY
Now that we have support for collections in ansible-test, the collections dev guide needs to be updated to reflect how testing works.
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
docs
|
https://github.com/ansible/ansible/issues/59604
|
https://github.com/ansible/ansible/pull/77518
|
727387ad7619fd9108ad250fd5d4ac75608dfe6d
|
fa7c5d9b104fdf82eecd1d3089c6d42256b789c7
| 2019-07-25T17:21:59Z |
python
| 2022-04-13T13:47:59Z |
docs/docsite/rst/community/reporting_bugs_and_features.rst
|
.. _reporting_bugs_and_features:
**************************************
Reporting bugs and requesting features
**************************************
.. contents::
:local:
.. _reporting_bugs:
Reporting a bug
===============
Security bugs
-------------
Ansible practices responsible disclosure - for security-related bugs, email `[email protected] <mailto:[email protected]>`_ to receive a prompt response. Do not submit a ticket or post to any public groups.
Bugs in ansible-core
--------------------
Before reporting a bug, search in GitHub for `already reported issues <https://github.com/ansible/ansible/issues>`_ and `open pull requests <https://github.com/ansible/ansible/pulls>`_ to see if someone has already addressed your issue. Unsure if you found a bug? Report the behavior on the :ref:`mailing list or community chat first <communication>`.
Also, use the mailing list or chat to discuss whether the problem is in ``ansible-core`` or a collection, and for "how do I do this" type questions.
You need a free GitHub account to `report bugs <https://github.com/ansible/ansible/issues>`_ that affect:
- multiple plugins
- a plugin that remained in the ansible/ansible repo
- the overall functioning of Ansible
How to write a good bug report
------------------------------
If you find a bug, open an issue using the `issue template <https://github.com/ansible/ansible/issues/new?assignees=&labels=&template=bug_report.yml>`_.
Fill out the issue template as completely and as accurately as possible. Include:
* your Ansible version
* the expected behavior and what you've tried (including the exact commands you were using or tasks you are running)
* the current behavior and why you think it is a bug
* the steps to reproduce the bug
* a minimal reproducible example and comments describing examples
* any relevant configurations and the components you used
* any relevant output plus ``ansible -vvvv`` (debugging) output
Make sure to preserve formatting using `code blocks <https://help.github.com/articles/creating-and-highlighting-code-blocks/>`_ when sharing YAML in playbooks. For multiple-file content, use gist.github.com, which is more durable than pastebin content.
.. _request_features:
Requesting a feature
====================
Before you request a feature, check what is :ref:`planned for future Ansible Releases <roadmaps>`. Check `existing pull requests tagged with feature <https://github.com/ansible/ansible/issues?q=is%3Aissue+is%3Aopen+label%3Afeature>`_.
To get your feature into Ansible, :ref:`submit a pull request <community_pull_requests>`, either against ansible-core or a collection. See also :ref:`ansible_collection_merge_requirements`. For ``ansible-core``, you can also open an issue in `ansible/ansible <https://github.com/ansible/ansible/issues>`_ or in a corresponding collection repository (to find the correct issue tracker, refer to :ref:`Bugs in collections<reporting_bugs_in_collections>`).
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77510 |
Applying limit switch with an empty inventory group causes error not warning?
|
### Summary
Hi,
Firstly - it's probably rather strong to call this a bug - it's an inconsistency IMHO.
I have a situation where from time to time my `[production_targets]` group in my inventory is empty - for example:
```
[integration_targets]
dev-01
qa-01
[production_targets]
[targets:children]
integration_targets
production_targets
[targets:vars]
ansible_user=apprunner
```
My expectation was that an empty group would result in Ansible doing nothing and returning success - after all, nothing has failed here. That is indeed what happens when the limit switch is not applied.
However, at least the way I've described my inventory above seems to lead to an error, specifically when I **limit** to an empty group:
```
ansible-playbook /path/to/deploy.yml -i /path/to/inventory.ini -l production_targets
```
I get an error not a warning:
**ERROR! Specified hosts and/or --limit does not match any hosts**
However it seems if I don't use the limit switch this isn't an error, just a warning:
**[WARNING]: No hosts matched, nothing to do**
I understand the logic in throwing an error (although I think a consistent warning would be less surprising, with and without use of limit) so if my approach is wrong, can you please advise how I can configure ansible to accept a lack of matches in the inventory when also using the limit switch, and move on without throwing an error?
Thanks!
### Issue Type
Bug Report
### Component Name
inventory
### Ansible Version
```console
ansible [core 2.11.9]
```
### Configuration
```console
$ ansible-config dump --only-changed
$
```
### OS / Environment
Ubuntu 18.04
### Steps to Reproduce
Make yourself an inventory file called inv.ini:
```
[integration_targets]
dev-01
qa-01
[production_targets]
[targets:children]
integration_targets
production_targets
[targets:vars]
ansible_user=apprunner
```
Then you can demonstrate the problem just using command line ansible
```
$ ansible targets -i inv.ini -a "ls" -l production_targets
ERROR! Specified hosts and/or --limit does not match any hosts
$ ansible production_targets -i inv.ini -a "ls"
[WARNING]: No hosts matched, nothing to do
```
### Expected Results
I'd expect no matches to give the same consistent warning message, whether I apply a limit or not:
```
$ ansible targets -i inv.ini -a "ls" -l production_targets
[WARNING]: No hosts matched, nothing to do
$ ansible production_targets -i inv.ini -a "ls"
[WARNING]: No hosts matched, nothing to do
```
### Actual Results
```console
The first command gives an ERROR not a WARNING, which seems inconsistent to me:
$ ansible targets -i inv.ini -a "ls" -l production_targets
ERROR! Specified hosts and/or --limit does not match any hosts
$ ansible production_targets -i inv.ini -a "ls"
[WARNING]: No hosts matched, nothing to do
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77510
|
https://github.com/ansible/ansible/pull/77517
|
6e5f1d781ddfa34460af9c3dbb944f29e8ec743d
|
793bb200ecc84022c2778182fdd52c5fbacd8e08
| 2022-04-11T16:11:23Z |
python
| 2022-04-14T17:06:34Z |
changelogs/fragments/better_nohosts_error.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77510 |
Applying limit switch with an empty inventory group causes error not warning?
|
### Summary
Hi,
Firstly - it's probably rather strong to call this a bug - it's an inconsistency IMHO.
I have a situation where from time to time my `[production_targets]` group in my inventory is empty - for example:
```
[integration_targets]
dev-01
qa-01
[production_targets]
[targets:children]
integration_targets
production_targets
[targets:vars]
ansible_user=apprunner
```
My expectation was that an empty group would result in Ansible doing nothing and returning success - after all, nothing has failed here. That is indeed what happens when the limit switch is not applied.
However, at least the way I've described my inventory above seems to lead to an error, specifically when I **limit** to an empty group:
```
ansible-playbook /path/to/deploy.yml -i /path/to/inventory.ini -l production_targets
```
I get an error not a warning:
**ERROR! Specified hosts and/or --limit does not match any hosts**
However it seems if I don't use the limit switch this isn't an error, just a warning:
**[WARNING]: No hosts matched, nothing to do**
I understand the logic in throwing an error (although I think a consistent warning would be less surprising, with and without use of limit) so if my approach is wrong, can you please advise how I can configure ansible to accept a lack of matches in the inventory when also using the limit switch, and move on without throwing an error?
Thanks!
### Issue Type
Bug Report
### Component Name
inventory
### Ansible Version
```console
ansible [core 2.11.9]
```
### Configuration
```console
$ ansible-config dump --only-changed
$
```
### OS / Environment
Ubuntu 18.04
### Steps to Reproduce
Make yourself an inventory file called inv.ini:
```
[integration_targets]
dev-01
qa-01
[production_targets]
[targets:children]
integration_targets
production_targets
[targets:vars]
ansible_user=apprunner
```
Then you can demonstrate the problem just using command line ansible
```
$ ansible targets -i inv.ini -a "ls" -l production_targets
ERROR! Specified hosts and/or --limit does not match any hosts
$ ansible production_targets -i inv.ini -a "ls"
[WARNING]: No hosts matched, nothing to do
```
### Expected Results
I'd expect no matches to give the same consistent warning message, whether I apply a limit or not:
```
$ ansible targets -i inv.ini -a "ls" -l production_targets
[WARNING]: No hosts matched, nothing to do
$ ansible production_targets -i inv.ini -a "ls"
[WARNING]: No hosts matched, nothing to do
```
### Actual Results
```console
The first command gives an ERROR not a WARNING, which seems inconsistent to me:
$ ansible targets -i inv.ini -a "ls" -l production_targets
ERROR! Specified hosts and/or --limit does not match any hosts
$ ansible production_targets -i inv.ini -a "ls"
[WARNING]: No hosts matched, nothing to do
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77510
|
https://github.com/ansible/ansible/pull/77517
|
6e5f1d781ddfa34460af9c3dbb944f29e8ec743d
|
793bb200ecc84022c2778182fdd52c5fbacd8e08
| 2022-04-11T16:11:23Z |
python
| 2022-04-14T17:06:34Z |
lib/ansible/cli/__init__.py
|
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2016, Toshio Kuratomi <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import sys
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
if sys.version_info < (3, 8):
raise SystemExit(
'ERROR: Ansible requires Python 3.8 or newer on the controller. '
'Current version: %s' % ''.join(sys.version.splitlines())
)
from importlib.metadata import version
from ansible.module_utils.compat.version import LooseVersion
# Used for determining if the system is running a new enough Jinja2 version
# and should only restrict on our documented minimum versions
jinja2_version = version('jinja2')
if jinja2_version < LooseVersion('3.0'):
raise SystemExit(
'ERROR: Ansible requires Jinja2 3.0 or newer on the controller. '
'Current version: %s' % jinja2_version
)
import errno
import getpass
import os
import subprocess
import traceback
from abc import ABC, abstractmethod
from pathlib import Path
try:
from ansible import constants as C
from ansible.utils.display import Display, initialize_locale
initialize_locale()
display = Display()
except Exception as e:
print('ERROR: %s' % e, file=sys.stderr)
sys.exit(5)
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
from ansible.inventory.manager import InventoryManager
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.common.file import is_executable
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.vault import PromptVaultSecret, get_file_vault_secret
from ansible.plugins.loader import add_all_plugin_dirs
from ansible.release import __version__
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path
from ansible.utils.path import unfrackpath
from ansible.utils.unsafe_proxy import to_unsafe_text
from ansible.vars.manager import VariableManager
try:
import argcomplete
HAS_ARGCOMPLETE = True
except ImportError:
HAS_ARGCOMPLETE = False
class CLI(ABC):
''' code behind bin/ansible* programs '''
PAGER = 'less'
# -F (quit-if-one-screen) -R (allow raw ansi control chars)
# -S (chop long lines) -X (disable termcap init and de-init)
LESS_OPTS = 'FRSX'
SKIP_INVENTORY_DEFAULTS = False
def __init__(self, args, callback=None):
"""
Base init method for all command line programs
"""
if not args:
raise ValueError('A non-empty list for args is required')
self.args = args
self.parser = None
self.callback = callback
if C.DEVEL_WARNING and __version__.endswith('dev0'):
display.warning(
'You are running the development version of Ansible. You should only run Ansible from "devel" if '
'you are modifying the Ansible engine, or trying out features under development. This is a rapidly '
'changing source of code and can become unstable at any point.'
)
@abstractmethod
def run(self):
"""Run the ansible command
Subclasses must implement this method. It does the actual work of
running an Ansible command.
"""
self.parse()
display.vv(to_text(opt_help.version(self.parser.prog)))
if C.CONFIG_FILE:
display.v(u"Using %s as config file" % to_text(C.CONFIG_FILE))
else:
display.v(u"No config file found; using defaults")
# warn about deprecated config options
for deprecated in C.config.DEPRECATED:
name = deprecated[0]
why = deprecated[1]['why']
if 'alternatives' in deprecated[1]:
alt = ', use %s instead' % deprecated[1]['alternatives']
else:
alt = ''
ver = deprecated[1].get('version')
date = deprecated[1].get('date')
collection_name = deprecated[1].get('collection_name')
display.deprecated("%s option, %s%s" % (name, why, alt),
version=ver, date=date, collection_name=collection_name)
@staticmethod
def split_vault_id(vault_id):
# return (before_@, after_@)
# if no @, return whole string as after_
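# e.g. split_vault_id('dev@prompt') -> ('dev', 'prompt')
# split_vault_id('prompt') -> (None, 'prompt')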
if '@' not in vault_id:
return (None, vault_id)
parts = vault_id.split('@', 1)
ret = tuple(parts)
return ret
@staticmethod
def build_vault_ids(vault_ids, vault_password_files=None,
ask_vault_pass=None, create_new_password=None,
auto_prompt=True):
vault_password_files = vault_password_files or []
vault_ids = vault_ids or []
# convert vault_password_files into vault_ids slugs
for password_file in vault_password_files:
id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, password_file)
# note this makes --vault-id higher precedence than --vault-password-file
# if we want to intertwingle them in order probably need a cli callback to populate vault_ids
# used by --vault-id and --vault-password-file
vault_ids.append(id_slug)
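# e.g. a password file /tmp/vault_pw (hypothetical path) becomes the slug
# 'default@/tmp/vault_pw', assuming the default identity is 'default'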
# if an action needs an encrypt password (create_new_password=True) and we don't
# have other secrets set up, then automatically add a password prompt as well.
# prompts can't/shouldn't work without a tty, so don't add prompt secrets
if ask_vault_pass or (not vault_ids and auto_prompt):
id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, u'prompt_ask_vault_pass')
vault_ids.append(id_slug)
return vault_ids
# TODO: remove the now unused args
@staticmethod
def setup_vault_secrets(loader, vault_ids, vault_password_files=None,
ask_vault_pass=None, create_new_password=False,
auto_prompt=True):
# list of tuples
vault_secrets = []
# Depending on the vault_id value (including how --ask-vault-pass / --vault-password-file create a vault_id)
# we need to show different prompts. This is for compat with older Towers that expect a
# certain vault password prompt format, so 'prompt_ask_vault_pass' vault_id gets the old format.
prompt_formats = {}
# If there are configured default vault identities, they are considered 'first'
# so we prepend them to vault_ids (from cli) here
vault_password_files = vault_password_files or []
if C.DEFAULT_VAULT_PASSWORD_FILE:
vault_password_files.append(C.DEFAULT_VAULT_PASSWORD_FILE)
if create_new_password:
prompt_formats['prompt'] = ['New vault password (%(vault_id)s): ',
'Confirm new vault password (%(vault_id)s): ']
# 2.3 format prompts for --ask-vault-pass
prompt_formats['prompt_ask_vault_pass'] = ['New Vault password: ',
'Confirm New Vault password: ']
else:
prompt_formats['prompt'] = ['Vault password (%(vault_id)s): ']
# The format when we use just --ask-vault-pass needs to match 'Vault password:\s*?$'
prompt_formats['prompt_ask_vault_pass'] = ['Vault password: ']
vault_ids = CLI.build_vault_ids(vault_ids,
vault_password_files,
ask_vault_pass,
create_new_password,
auto_prompt=auto_prompt)
last_exception = found_vault_secret = None
for vault_id_slug in vault_ids:
vault_id_name, vault_id_value = CLI.split_vault_id(vault_id_slug)
if vault_id_value in ['prompt', 'prompt_ask_vault_pass']:
# --vault-id some_name@prompt_ask_vault_pass --vault-id other_name@prompt_ask_vault_pass will be a little
# confusing since it will use the old format without the vault id in the prompt
built_vault_id = vault_id_name or C.DEFAULT_VAULT_IDENTITY
# choose the prompt based on --vault-id=prompt or --ask-vault-pass. --ask-vault-pass
# always gets the old format for Tower compatibility.
# ie, we used --ask-vault-pass, so we need to use the old vault password prompt
# format since Tower needs to match on that format.
prompted_vault_secret = PromptVaultSecret(prompt_formats=prompt_formats[vault_id_value],
vault_id=built_vault_id)
# an empty or invalid password from the prompt will warn and continue to the next
# without erroring globally
try:
prompted_vault_secret.load()
except AnsibleError as exc:
display.warning('Error in vault password prompt (%s): %s' % (vault_id_name, exc))
raise
found_vault_secret = True
vault_secrets.append((built_vault_id, prompted_vault_secret))
# update loader with new secrets incrementally, so we can load a vault password
# that is encrypted with a vault secret provided earlier
loader.set_vault_secrets(vault_secrets)
continue
# assuming anything else is a password file
display.vvvvv('Reading vault password file: %s' % vault_id_value)
# read vault_pass from a file
try:
file_vault_secret = get_file_vault_secret(filename=vault_id_value,
vault_id=vault_id_name,
loader=loader)
except AnsibleError as exc:
display.warning('Error getting vault password file (%s): %s' % (vault_id_name, to_text(exc)))
last_exception = exc
continue
try:
file_vault_secret.load()
except AnsibleError as exc:
display.warning('Error in vault password file loading (%s): %s' % (vault_id_name, to_text(exc)))
last_exception = exc
continue
found_vault_secret = True
if vault_id_name:
vault_secrets.append((vault_id_name, file_vault_secret))
else:
vault_secrets.append((C.DEFAULT_VAULT_IDENTITY, file_vault_secret))
# update loader with as-yet-known vault secrets
loader.set_vault_secrets(vault_secrets)
# An invalid or missing password file will error globally
# if no valid vault secret was found.
if last_exception and not found_vault_secret:
raise last_exception
return vault_secrets
@staticmethod
def _get_secret(prompt):
secret = getpass.getpass(prompt=prompt)
if secret:
secret = to_unsafe_text(secret)
return secret
@staticmethod
def ask_passwords():
''' prompt for connection and become passwords if needed '''
op = context.CLIARGS
sshpass = None
becomepass = None
become_prompt = ''
become_prompt_method = "BECOME" if C.AGNOSTIC_BECOME_PROMPT else op['become_method'].upper()
try:
become_prompt = "%s password: " % become_prompt_method
if op['ask_pass']:
sshpass = CLI._get_secret("SSH password: ")
become_prompt = "%s password[defaults to SSH password]: " % become_prompt_method
elif op['connection_password_file']:
sshpass = CLI.get_password_from_file(op['connection_password_file'])
if op['become_ask_pass']:
becomepass = CLI._get_secret(become_prompt)
if op['ask_pass'] and becomepass == '':
becomepass = sshpass
elif op['become_password_file']:
becomepass = CLI.get_password_from_file(op['become_password_file'])
except EOFError:
pass
return (sshpass, becomepass)
def validate_conflicts(self, op, runas_opts=False, fork_opts=False):
''' check for conflicting options '''
if fork_opts:
if op.forks < 1:
self.parser.error("The number of processes (--forks) must be >= 1")
return op
@abstractmethod
def init_parser(self, usage="", desc=None, epilog=None):
"""
Create an options parser for most ansible scripts
Subclasses need to implement this method. They will usually call the base class's
init_parser to create a basic version and then add their own options on top of that.
An implementation will look something like this::
def init_parser(self):
super(MyCLI, self).init_parser(usage="My Ansible CLI", inventory_opts=True)
ansible.arguments.option_helpers.add_runas_options(self.parser)
self.parser.add_option('--my-option', dest='my_option', action='store')
"""
self.parser = opt_help.create_base_parser(self.name, usage=usage, desc=desc, epilog=epilog)
@abstractmethod
def post_process_args(self, options):
"""Process the command line args
Subclasses need to implement this method. This method validates and transforms the command
line arguments. It can be used to check whether conflicting values were given, whether filenames
exist, etc.
An implementation will look something like this::
def post_process_args(self, options):
options = super(MyCLI, self).post_process_args(options)
if options.addition and options.subtraction:
raise AnsibleOptionsError('Only one of --addition and --subtraction can be specified')
if isinstance(options.listofhosts, string_types):
options.listofhosts = options.listofhosts.split(',')
return options
"""
# process tags
if hasattr(options, 'tags') and not options.tags:
# optparse defaults do not do what's expected
# More specifically, we want `--tags` to be additive. So we cannot
# simply change C.TAGS_RUN's default to ["all"] because then passing
# --tags foo would cause us to have ['all', 'foo']
options.tags = ['all']
if hasattr(options, 'tags') and options.tags:
tags = set()
for tag_set in options.tags:
for tag in tag_set.split(u','):
tags.add(tag.strip())
options.tags = list(tags)
# process skip_tags
if hasattr(options, 'skip_tags') and options.skip_tags:
skip_tags = set()
for tag_set in options.skip_tags:
for tag in tag_set.split(u','):
skip_tags.add(tag.strip())
options.skip_tags = list(skip_tags)
# process inventory options except for CLIs that require their own processing
if hasattr(options, 'inventory') and not self.SKIP_INVENTORY_DEFAULTS:
if options.inventory:
# should always be list
if isinstance(options.inventory, string_types):
options.inventory = [options.inventory]
# Ensure full paths when needed
options.inventory = [unfrackpath(opt, follow=False) if ',' not in opt else opt for opt in options.inventory]
else:
options.inventory = C.DEFAULT_HOST_LIST
return options
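# Illustrative note (not part of the original source): with --tags passed as
# ['web,db', 'cache'], the loop above yields options.tags == ['web', 'db', 'cache']
# (order unspecified, since the values pass through a set).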
def parse(self):
"""Parse the command line args
This method parses the command line arguments. It uses the parser
stored in the self.parser attribute and saves the args and options in
context.CLIARGS.
Subclasses need to implement two helper methods, init_parser() and post_process_args() which
are called from this function before and after parsing the arguments.
"""
self.init_parser()
if HAS_ARGCOMPLETE:
argcomplete.autocomplete(self.parser)
try:
options = self.parser.parse_args(self.args[1:])
except SystemExit as e:
if e.code != 0:
self.parser.exit(status=2, message=" \n%s" % self.parser.format_help())
raise
options = self.post_process_args(options)
context._init_global_context(options)
@staticmethod
def version_info(gitinfo=False):
''' return full ansible version info '''
if gitinfo:
# expensive call, use with care
ansible_version_string = opt_help.version()
else:
ansible_version_string = __version__
ansible_version = ansible_version_string.split()[0]
ansible_versions = ansible_version.split('.')
for counter in range(len(ansible_versions)):
if ansible_versions[counter] == "":
ansible_versions[counter] = 0
try:
ansible_versions[counter] = int(ansible_versions[counter])
except Exception:
pass
if len(ansible_versions) < 3:
for counter in range(len(ansible_versions), 3):
ansible_versions.append(0)
return {'string': ansible_version_string.strip(),
'full': ansible_version,
'major': ansible_versions[0],
'minor': ansible_versions[1],
'revision': ansible_versions[2]}
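# Illustrative example (an assumption, not in the source): for ansible-core
# 2.12.4, version_info() would return
# {'string': '2.12.4', 'full': '2.12.4', 'major': 2, 'minor': 12, 'revision': 4}.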
@staticmethod
def pager(text):
''' find reasonable way to display text '''
# this is a much simpler form of what is in pydoc.py
if not sys.stdout.isatty():
display.display(text, screen_only=True)
elif 'PAGER' in os.environ:
if sys.platform == 'win32':
display.display(text, screen_only=True)
else:
CLI.pager_pipe(text, os.environ['PAGER'])
else:
p = subprocess.Popen('less --version', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
if p.returncode == 0:
CLI.pager_pipe(text, 'less')
else:
display.display(text, screen_only=True)
@staticmethod
def pager_pipe(text, cmd):
''' pipe text through a pager '''
if 'LESS' not in os.environ:
os.environ['LESS'] = CLI.LESS_OPTS
try:
cmd = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=sys.stdout)
cmd.communicate(input=to_bytes(text))
except IOError:
pass
except KeyboardInterrupt:
pass
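# Behavior sketch (illustrative, derived from the two methods above):
# CLI.pager(text) prints directly when stdout is not a tty, honors $PAGER when
# set (except on win32), and otherwise falls back to `less` with CLI.LESS_OPTS
# if `less --version` exits successfully.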
@staticmethod
def _play_prereqs():
options = context.CLIARGS
# all needs loader
loader = DataLoader()
basedir = options.get('basedir', False)
if basedir:
loader.set_basedir(basedir)
add_all_plugin_dirs(basedir)
AnsibleCollectionConfig.playbook_paths = basedir
default_collection = _get_collection_name_from_path(basedir)
if default_collection:
display.warning(u'running with default collection {0}'.format(default_collection))
AnsibleCollectionConfig.default_collection = default_collection
vault_ids = list(options['vault_ids'])
default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST
vault_ids = default_vault_ids + vault_ids
vault_secrets = CLI.setup_vault_secrets(loader,
vault_ids=vault_ids,
vault_password_files=list(options['vault_password_files']),
ask_vault_pass=options['ask_vault_pass'],
auto_prompt=False)
loader.set_vault_secrets(vault_secrets)
# create the inventory, and filter it based on the subset specified (if any)
inventory = InventoryManager(loader=loader, sources=options['inventory'], cache=(not options.get('flush_cache')))
# create the variable manager, which will be shared throughout
# the code, ensuring a consistent view of global variables
variable_manager = VariableManager(loader=loader, inventory=inventory, version_info=CLI.version_info(gitinfo=False))
return loader, inventory, variable_manager
@staticmethod
def get_host_list(inventory, subset, pattern='all'):
no_hosts = False
if len(inventory.list_hosts()) == 0:
# Empty inventory
if C.LOCALHOST_WARNING and pattern not in C.LOCALHOST:
display.warning("provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'")
no_hosts = True
inventory.subset(subset)
hosts = inventory.list_hosts(pattern)
if not hosts and no_hosts is False:
raise AnsibleError("Specified hosts and/or --limit does not match any hosts")
return hosts
@staticmethod
def get_password_from_file(pwd_file):
b_pwd_file = to_bytes(pwd_file)
secret = None
if b_pwd_file == b'-':
# ensure its read as bytes
secret = sys.stdin.buffer.read()
elif not os.path.exists(b_pwd_file):
raise AnsibleError("The password file %s was not found" % pwd_file)
elif is_executable(b_pwd_file):
display.vvvv(u'The password file %s is a script.' % to_text(pwd_file))
cmd = [b_pwd_file]
try:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as e:
raise AnsibleError("Problem occured when trying to run the password script %s (%s)."
" If this is not a script, remove the executable bit from the file." % (pwd_file, e))
stdout, stderr = p.communicate()
if p.returncode != 0:
raise AnsibleError("The password script %s returned an error (rc=%s): %s" % (pwd_file, p.returncode, stderr))
secret = stdout
else:
try:
f = open(b_pwd_file, "rb")
secret = f.read().strip()
f.close()
except (OSError, IOError) as e:
raise AnsibleError("Could not read password file %s: %s" % (pwd_file, e))
secret = secret.strip(b'\r\n')
if not secret:
raise AnsibleError('Empty password was provided from file (%s)' % pwd_file)
return to_unsafe_text(secret)
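# Illustrative usage (an assumption, not from the source):
# CLI.get_password_from_file('/run/secrets/vault_pass') returns the file's
# stripped contents as unsafe text; '-' reads from stdin, and an executable
# path is run with its stdout used as the secret.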
@classmethod
def cli_executor(cls, args=None):
if args is None:
args = sys.argv
try:
display.debug("starting run")
ansible_dir = Path(C.ANSIBLE_HOME).expanduser()
try:
ansible_dir.mkdir(mode=0o700)
except OSError as exc:
if exc.errno != errno.EEXIST:
display.warning(
"Failed to create the directory '%s': %s" % (ansible_dir, to_text(exc, errors='surrogate_or_replace'))
)
else:
display.debug("Created the '%s' directory" % ansible_dir)
try:
args = [to_text(a, errors='surrogate_or_strict') for a in args]
except UnicodeError:
display.error('Command line args are not in utf-8, unable to continue. Ansible currently only understands utf-8')
display.display(u"The full traceback was:\n\n%s" % to_text(traceback.format_exc()))
exit_code = 6
else:
cli = cls(args)
exit_code = cli.run()
except AnsibleOptionsError as e:
cli.parser.print_help()
display.error(to_text(e), wrap_text=False)
exit_code = 5
except AnsibleParserError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 4
# TQM takes care of these, but leaving comment to reserve the exit codes
# except AnsibleHostUnreachable as e:
# display.error(str(e))
# exit_code = 3
# except AnsibleHostFailed as e:
# display.error(str(e))
# exit_code = 2
except AnsibleError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 1
except KeyboardInterrupt:
display.error("User interrupted execution")
exit_code = 99
except Exception as e:
if C.DEFAULT_DEBUG:
# Show raw stacktraces in debug mode; this also allows pdb to
# enter post-mortem mode.
raise
have_cli_options = bool(context.CLIARGS)
display.error("Unexpected Exception, this is probably a bug: %s" % to_text(e), wrap_text=False)
if not have_cli_options or have_cli_options and context.CLIARGS['verbosity'] > 2:
log_only = False
if hasattr(e, 'orig_exc'):
display.vvv('\nexception type: %s' % to_text(type(e.orig_exc)))
why = to_text(e.orig_exc)
if to_text(e) != why:
display.vvv('\noriginal msg: %s' % why)
else:
display.display("to see the full traceback, use -vvv")
log_only = True
display.display(u"the full traceback was:\n\n%s" % to_text(traceback.format_exc()), log_only=log_only)
exit_code = 250
sys.exit(exit_code)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,519 |
Numerical Group name throws error in yaml formatted hosts file.
|
### Summary
When attempting to run a simple playbook to get the hostname of a Cisco 3750X, Ansible fails to parse the YAML-formatted hosts file. The playbook works with an INI-formatted hosts file; however, in an effort to use ansible-vault I need to migrate my file to YAML format.
### Issue Type
Bug Report
### Component Name
apt
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/arcreigh/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/arcreigh/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
N/A
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
```yaml
#/etc/ansible/hosts
---
all:
children:
Network:
children:
IOS:
hosts:
3750:
ansible_host: 3750.arc.net
vars:
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: cisco.ios.ios
ansible_user: redacted
ansible_password: redacted
#/home/ansible/playbooks/cisco/get_hostname.yml
---
- name: Get Cisco Device Hostname and Version
connection: ansible.netcommon.network_cli
gather_facts: false
hosts: all
tasks:
- name: Get config for IOS device
cisco.ios.ios_facts:
gather_subset: all
- name: Display config
debug:
msg: "The hostname is {{ ansible_net_hostname }} and the OS version is {{ ansible_net_version }}"
```
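A likely workaround (an assumption based on YAML typing rules, not something stated in this issue): quote the numeric key, i.e. `"3750":` instead of `3750:`, so the parser loads it as a string rather than an integer.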
### Expected Results
Expected output would be
The hostname is 3750.arc.net and the OS version is (insert ios version here)
### Actual Results
```console
ansible-playbook -i /etc/ansible/hosts get_hostname.yml
[WARNING]: * Failed to parse /etc/ansible/hosts with yaml plugin: argument of type 'int' is not iterable
[WARNING]: * Failed to parse /etc/ansible/hosts with ini plugin: Invalid host pattern '---' supplied, '---' is normally a sign this is a YAML file.
[WARNING]: Unable to parse /etc/ansible/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Get Cisco Device Hostname and Version] *************************************************************************************************************************************************
skipping: no hosts matched
PLAY RECAP ***********************************************************************************************************************************************************************************
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77519
|
https://github.com/ansible/ansible/pull/77544
|
abdd237de70b4fab83f11df65262590a5fd93a3d
|
9280396e1991f2365c1df79cff83f323b0908c7a
| 2022-04-12T17:06:13Z |
python
| 2022-04-19T15:06:59Z |
lib/ansible/plugins/inventory/yaml.py
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: yaml
version_added: "2.4"
short_description: Uses a specific YAML file as an inventory source.
description:
- "YAML-based inventory, should start with the C(all) group and contain hosts/vars/children entries."
- Host entries can have sub-entries defined, which will be treated as variables.
- Vars entries are normal group vars.
- "Children are 'child groups', which can also have their own vars/hosts/children and so on."
- File MUST have a valid extension, defined in configuration.
notes:
- If you want to set vars for the C(all) group inside the inventory file, the C(all) group must be the first entry in the file.
- Enabled in configuration by default.
options:
yaml_extensions:
description: list of 'valid' extensions for files containing YAML
type: list
elements: string
default: ['.yaml', '.yml', '.json']
env:
- name: ANSIBLE_YAML_FILENAME_EXT
- name: ANSIBLE_INVENTORY_PLUGIN_EXTS
ini:
- key: yaml_valid_extensions
section: defaults
- section: inventory_plugin_yaml
key: yaml_valid_extensions
'''
EXAMPLES = '''
all: # keys must be unique, i.e. only one 'hosts' per group
hosts:
test1:
test2:
host_var: value
vars:
group_all_var: value
children: # key order does not matter, indentation does
other_group:
children:
group_x:
hosts:
test5 # Note that one machine will work without a colon
#group_x:
# hosts:
# test5 # But this won't
# test7 #
group_y:
hosts:
test6: # So always use a colon
vars:
g2_var2: value3
hosts:
test4:
ansible_host: 127.0.0.1
last_group:
hosts:
test1 # same host as above, additional group membership
vars:
group_last_var: value
'''
import os
from collections.abc import MutableMapping
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_native, to_text
from ansible.plugins.inventory import BaseFileInventoryPlugin
NoneType = type(None)
class InventoryModule(BaseFileInventoryPlugin):
NAME = 'yaml'
def __init__(self):
super(InventoryModule, self).__init__()
def verify_file(self, path):
valid = False
if super(InventoryModule, self).verify_file(path):
file_name, ext = os.path.splitext(path)
if not ext or ext in self.get_option('yaml_extensions'):
valid = True
return valid
def parse(self, inventory, loader, path, cache=True):
''' parses the inventory file '''
super(InventoryModule, self).parse(inventory, loader, path)
self.set_options()
try:
data = self.loader.load_from_file(path, cache=False)
except Exception as e:
raise AnsibleParserError(e)
if not data:
raise AnsibleParserError('Parsed empty YAML file')
elif not isinstance(data, MutableMapping):
raise AnsibleParserError('YAML inventory has invalid structure, it should be a dictionary, got: %s' % type(data))
elif data.get('plugin'):
raise AnsibleParserError('Plugin configuration YAML file, not YAML inventory')
# We expect top level keys to correspond to groups, iterate over them
# to get host, vars and subgroups (which we iterate over recursively)
if isinstance(data, MutableMapping):
for group_name in data:
self._parse_group(group_name, data[group_name])
else:
raise AnsibleParserError("Invalid data from file, expected dictionary and got:\n\n%s" % to_native(data))
def _parse_group(self, group, group_data):
if isinstance(group_data, (MutableMapping, NoneType)): # type: ignore[misc]
try:
group = self.inventory.add_group(group)
except AnsibleError as e:
raise AnsibleParserError("Unable to add group %s: %s" % (group, to_text(e)))
if group_data is not None:
# make sure they are dicts
for section in ['vars', 'children', 'hosts']:
if section in group_data:
# convert strings to dicts as these are allowed
if isinstance(group_data[section], string_types):
group_data[section] = {group_data[section]: None}
if not isinstance(group_data[section], (MutableMapping, NoneType)): # type: ignore[misc]
raise AnsibleParserError('Invalid "%s" entry for "%s" group, requires a dictionary, found "%s" instead.' %
(section, group, type(group_data[section])))
for key in group_data:
if not isinstance(group_data[key], (MutableMapping, NoneType)): # type: ignore[misc]
self.display.warning('Skipping key (%s) in group (%s) as it is not a mapping, it is a %s' % (key, group, type(group_data[key])))
continue
if isinstance(group_data[key], NoneType): # type: ignore[misc]
self.display.vvv('Skipping empty key (%s) in group (%s)' % (key, group))
elif key == 'vars':
for var in group_data[key]:
self.inventory.set_variable(group, var, group_data[key][var])
elif key == 'children':
for subgroup in group_data[key]:
subgroup = self._parse_group(subgroup, group_data[key][subgroup])
self.inventory.add_child(group, subgroup)
elif key == 'hosts':
for host_pattern in group_data[key]:
hosts, port = self._parse_host(host_pattern)
self._populate_host_vars(hosts, group_data[key][host_pattern] or {}, group, port)
else:
self.display.warning('Skipping unexpected key (%s) in group (%s), only "vars", "children" and "hosts" are valid' % (key, group))
else:
self.display.warning("Skipping '%s' as this is not a valid group definition" % group)
return group
def _parse_host(self, host_pattern):
'''
Each host key can be a pattern, try to process it and add variables as needed
'''
(hostnames, port) = self._expand_hostpattern(host_pattern)
return hostnames, port
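# Sketch of a tolerant guard (an assumption, not the upstream fix): YAML loads
# an unquoted numeric key such as `3750:` as an int, so the substring checks in
# _expand_hostpattern() operate on a non-string and raise
# "argument of type 'int' is not iterable". Coercing first would avoid that:
#
#     if not isinstance(host_pattern, string_types):
#         host_pattern = to_native(host_pattern)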
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,560 |
ansible-galaxy fail if dependencies is none
|
### Summary
If `dependencies` in `galaxy.yml` is defined but not assigned any value, the installation would fail.
Note the empty `dependencies:` key in the first example below; with the key present but no value assigned, the installation fails.
```yml
namespace: my_namespace
name: my_collection
version: 1.0.0
readme: README.md
authors:
- your name <[email protected]>
description: your collection description
license:
- GPL-2.0-or-later
license_file: ''
tags: []
dependencies:
repository: http://example.com/repository
documentation: http://docs.example.com
homepage: http://example.com
issues: http://example.com/issue/tracker
build_ignore: []
```
It works if `dependencies` is assigned to `{}`.
```yml
namespace: my_namespace
name: my_collection
version: 1.0.0
readme: README.md
authors:
- your name <[email protected]>
description: your collection description
license:
- GPL-2.0-or-later
license_file: ''
tags: []
dependencies: {}
repository: http://example.com/repository
documentation: http://docs.example.com
homepage: http://example.com
issues: http://example.com/issue/tracker
build_ignore: []
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become
unstable at any point.
ansible [core 2.14.0.dev0] (devel abdd237de7) last updated 2022/04/19 11:12:55 (GMT +800)
config file = None
configured module search path = ['/Users/jackzhang/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jackzhang/src/github/ansible/lib/ansible
ansible collection location = /Users/jackzhang/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/jackzhang/.pyenv/versions/v-397/bin/ansible
python version = 3.9.7 (default, Oct 28 2021, 19:01:53) [Clang 12.0.5 (clang-1205.0.22.11)]
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
CONFIG_FILE() = None
```
### OS / Environment
MacOS
### Steps to Reproduce
```shell
$ ansible-galaxy collection init my_namespace.my_collection
$ cd my_namespace.my_collection
$ gsed -ri 's/^(dependencies:)(.*)$/\1/' galaxy.yml
$ ansible-galaxy collection install -vvv .
```
### Expected Results
I expect the `my_namespace.my_collection` with `dependencies=None` can be installed successfully
### Actual Results
```console
ansible-galaxy collection install -vvvv .
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become
unstable at any point.
ansible-galaxy [core 2.14.0.dev0] (devel abdd237de7) last updated 2022/04/19 11:12:55 (GMT +800)
config file = None
configured module search path = ['/Users/jackzhang/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jackzhang/src/github/ansible/lib/ansible
ansible collection location = /Users/jackzhang/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/jackzhang/.pyenv/versions/v-397/bin/ansible-galaxy
python version = 3.9.7 (default, Oct 28 2021, 19:01:53) [Clang 12.0.5 (clang-1205.0.22.11)]
jinja version = 3.1.1
libyaml = True
No config file found; using defaults
Starting galaxy collection install process
Found installed collection geerlingguy.testing:1.0.0 at '/Users/jackzhang/.ansible/collections/ansible_collections/geerlingguy/testing'
Found installed collection lablabs.wireguard:0.1.1 at '/Users/jackzhang/.ansible/collections/ansible_collections/lablabs/wireguard'
[WARNING]: Collection at '/Users/jackzhang/.ansible/collections/ansible_collections/foo/bar' does not have a MANIFEST.json
file, nor has it galaxy.yml: cannot detect version.
Found installed collection foo.bar:* at '/Users/jackzhang/.ansible/collections/ansible_collections/foo/bar'
Found installed collection robertdebock.development_environment:2.1.1 at '/Users/jackzhang/.ansible/collections/ansible_collections/robertdebock/development_environment'
Found installed collection containers.podman:1.9.3 at '/Users/jackzhang/.ansible/collections/ansible_collections/containers/podman'
Found installed collection community.docker:2.3.0 at '/Users/jackzhang/.ansible/collections/ansible_collections/community/docker'
Found installed collection community.general:4.7.0 at '/Users/jackzhang/.ansible/collections/ansible_collections/community/general'
Found installed collection community.molecule:0.1.0 at '/Users/jackzhang/.ansible/collections/ansible_collections/community/molecule'
Process install dependency map
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'items'
the full traceback was:
Traceback (most recent call last):
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/__init__.py", line 601, in cli_executor
exit_code = cli.run()
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/galaxy.py", line 646, in run
return context.CLIARGS['func']()
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/galaxy.py", line 102, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/galaxy.py", line 1300, in execute_install
self._execute_install_collection(
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/galaxy.py", line 1328, in _execute_install_collection
install_collections(
File "/Users/jackzhang/src/github/ansible/lib/ansible/galaxy/collection/__init__.py", line 683, in install_collections
dependency_map = _resolve_depenency_map(
File "/Users/jackzhang/src/github/ansible/lib/ansible/galaxy/collection/__init__.py", line 1594, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/Users/jackzhang/.pyenv/versions/3.9.7/envs/v-397/lib/python3.9/site-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/Users/jackzhang/.pyenv/versions/3.9.7/envs/v-397/lib/python3.9/site-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/Users/jackzhang/.pyenv/versions/3.9.7/envs/v-397/lib/python3.9/site-packages/resolvelib/resolvers.py", line 207, in _attempt_to_pin_criterion
criteria = self._get_criteria_to_update(candidate)
File "/Users/jackzhang/.pyenv/versions/3.9.7/envs/v-397/lib/python3.9/site-packages/resolvelib/resolvers.py", line 198, in _get_criteria_to_update
for r in self._p.get_dependencies(candidate):
File "/Users/jackzhang/src/github/ansible/lib/ansible/galaxy/dependency_resolution/providers.py", line 425, in get_dependencies
for dep_name, dep_req in req_map.items()
AttributeError: 'NoneType' object has no attribute 'items'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77560
|
https://github.com/ansible/ansible/pull/77561
|
4faa576ee9c0892668cd150307eae600f7cc4068
|
4d69c09695c8f78b95edf51314999be3c19b62eb
| 2022-04-19T07:59:36Z |
python
| 2022-04-19T19:54:45Z |
changelogs/fragments/77561-ansible-galaxy-coll-install-null-dependencies.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,560 |
ansible-galaxy fail if dependencies is none
|
### Summary
If `dependencies` in `galaxy.yml` is defined but not assigned any value, the installation would fail.
Note the empty `dependencies:` key in the first example below; with the key present but no value assigned, the installation fails.
```yml
namespace: my_namespace
name: my_collection
version: 1.0.0
readme: README.md
authors:
- your name <[email protected]>
description: your collection description
license:
- GPL-2.0-or-later
license_file: ''
tags: []
dependencies:
repository: http://example.com/repository
documentation: http://docs.example.com
homepage: http://example.com
issues: http://example.com/issue/tracker
build_ignore: []
```
It works if `dependencies` is assigned to `{}`.
```yml
namespace: my_namespace
name: my_collection
version: 1.0.0
readme: README.md
authors:
- your name <[email protected]>
description: your collection description
license:
- GPL-2.0-or-later
license_file: ''
tags: []
dependencies: {}
repository: http://example.com/repository
documentation: http://docs.example.com
homepage: http://example.com
issues: http://example.com/issue/tracker
build_ignore: []
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become
unstable at any point.
ansible [core 2.14.0.dev0] (devel abdd237de7) last updated 2022/04/19 11:12:55 (GMT +800)
config file = None
configured module search path = ['/Users/jackzhang/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jackzhang/src/github/ansible/lib/ansible
ansible collection location = /Users/jackzhang/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/jackzhang/.pyenv/versions/v-397/bin/ansible
python version = 3.9.7 (default, Oct 28 2021, 19:01:53) [Clang 12.0.5 (clang-1205.0.22.11)]
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
CONFIG_FILE() = None
```
### OS / Environment
MacOS
### Steps to Reproduce
```shell
$ ansible-galaxy collection init my_namespace.my_collection
$ cd my_namespace.my_collection
$ gsed -ri 's/^(dependencies:)(.*)$/\1/' galaxy.yml
$ ansible-galaxy collection install -vvv .
```
### Expected Results
I expect the `my_namespace.my_collection` with `dependencies=None` can be installed successfully
### Actual Results
```console
ansible-galaxy collection install -vvvv .
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become
unstable at any point.
ansible-galaxy [core 2.14.0.dev0] (devel abdd237de7) last updated 2022/04/19 11:12:55 (GMT +800)
config file = None
configured module search path = ['/Users/jackzhang/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jackzhang/src/github/ansible/lib/ansible
ansible collection location = /Users/jackzhang/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/jackzhang/.pyenv/versions/v-397/bin/ansible-galaxy
python version = 3.9.7 (default, Oct 28 2021, 19:01:53) [Clang 12.0.5 (clang-1205.0.22.11)]
jinja version = 3.1.1
libyaml = True
No config file found; using defaults
Starting galaxy collection install process
Found installed collection geerlingguy.testing:1.0.0 at '/Users/jackzhang/.ansible/collections/ansible_collections/geerlingguy/testing'
Found installed collection lablabs.wireguard:0.1.1 at '/Users/jackzhang/.ansible/collections/ansible_collections/lablabs/wireguard'
[WARNING]: Collection at '/Users/jackzhang/.ansible/collections/ansible_collections/foo/bar' does not have a MANIFEST.json
file, nor has it galaxy.yml: cannot detect version.
Found installed collection foo.bar:* at '/Users/jackzhang/.ansible/collections/ansible_collections/foo/bar'
Found installed collection robertdebock.development_environment:2.1.1 at '/Users/jackzhang/.ansible/collections/ansible_collections/robertdebock/development_environment'
Found installed collection containers.podman:1.9.3 at '/Users/jackzhang/.ansible/collections/ansible_collections/containers/podman'
Found installed collection community.docker:2.3.0 at '/Users/jackzhang/.ansible/collections/ansible_collections/community/docker'
Found installed collection community.general:4.7.0 at '/Users/jackzhang/.ansible/collections/ansible_collections/community/general'
Found installed collection community.molecule:0.1.0 at '/Users/jackzhang/.ansible/collections/ansible_collections/community/molecule'
Process install dependency map
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'items'
the full traceback was:
Traceback (most recent call last):
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/__init__.py", line 601, in cli_executor
exit_code = cli.run()
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/galaxy.py", line 646, in run
return context.CLIARGS['func']()
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/galaxy.py", line 102, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/galaxy.py", line 1300, in execute_install
self._execute_install_collection(
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/galaxy.py", line 1328, in _execute_install_collection
install_collections(
File "/Users/jackzhang/src/github/ansible/lib/ansible/galaxy/collection/__init__.py", line 683, in install_collections
dependency_map = _resolve_depenency_map(
File "/Users/jackzhang/src/github/ansible/lib/ansible/galaxy/collection/__init__.py", line 1594, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/Users/jackzhang/.pyenv/versions/3.9.7/envs/v-397/lib/python3.9/site-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/Users/jackzhang/.pyenv/versions/3.9.7/envs/v-397/lib/python3.9/site-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/Users/jackzhang/.pyenv/versions/3.9.7/envs/v-397/lib/python3.9/site-packages/resolvelib/resolvers.py", line 207, in _attempt_to_pin_criterion
criteria = self._get_criteria_to_update(candidate)
File "/Users/jackzhang/.pyenv/versions/3.9.7/envs/v-397/lib/python3.9/site-packages/resolvelib/resolvers.py", line 198, in _get_criteria_to_update
for r in self._p.get_dependencies(candidate):
File "/Users/jackzhang/src/github/ansible/lib/ansible/galaxy/dependency_resolution/providers.py", line 425, in get_dependencies
for dep_name, dep_req in req_map.items()
AttributeError: 'NoneType' object has no attribute 'items'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77560
|
https://github.com/ansible/ansible/pull/77561
|
4faa576ee9c0892668cd150307eae600f7cc4068
|
4d69c09695c8f78b95edf51314999be3c19b62eb
| 2022-04-19T07:59:36Z |
python
| 2022-04-19T19:54:45Z |
lib/ansible/galaxy/collection/concrete_artifact_manager.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Concrete collection candidate management helper module."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import tarfile
import subprocess
import typing as t
from contextlib import contextmanager
from hashlib import sha256
from urllib.error import URLError
from urllib.parse import urldefrag
from shutil import rmtree
from tempfile import mkdtemp
if t.TYPE_CHECKING:
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement,
)
from ansible.galaxy.token import GalaxyToken
from ansible.errors import AnsibleError
from ansible.galaxy import get_collections_galaxy_meta_info
from ansible.galaxy.dependency_resolution.dataclasses import _GALAXY_YAML
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.yaml import yaml_load
from ansible.module_utils.six import raise_from
from ansible.module_utils.urls import open_url
from ansible.utils.display import Display
import yaml
display = Display()
MANIFEST_FILENAME = 'MANIFEST.json'
class ConcreteArtifactsManager:
"""Manager for on-disk collection artifacts.
It is responsible for:
* downloading remote collections from Galaxy-compatible servers and
direct links to tarballs or SCM repositories
* keeping track of local ones
* keeping track of Galaxy API tokens for downloads from Galaxy'ish
as well as the artifact hashes
* keeping track of Galaxy API signatures for downloads from Galaxy'ish
* caching all of above
* retrieving the metadata out of the downloaded artifacts
"""
def __init__(self, b_working_directory, validate_certs=True, keyring=None, timeout=60, required_signature_count=None, ignore_signature_errors=None):
# type: (bytes, bool, str, int, str, list[str]) -> None
"""Initialize ConcreteArtifactsManager caches and costraints."""
self._validate_certs = validate_certs # type: bool
self._artifact_cache = {} # type: dict[bytes, bytes]
self._galaxy_artifact_cache = {} # type: dict[Candidate | Requirement, bytes]
self._artifact_meta_cache = {} # type: dict[bytes, dict[str, str | list[str] | dict[str, str] | None]]
self._galaxy_collection_cache = {} # type: dict[Candidate | Requirement, tuple[str, str, GalaxyToken]]
self._galaxy_collection_origin_cache = {} # type: dict[Candidate, tuple[str, list[dict[str, str]]]]
self._b_working_directory = b_working_directory # type: bytes
self._supplemental_signature_cache = {} # type: dict[str, str]
self._keyring = keyring # type: str
self.timeout = timeout # type: int
self._required_signature_count = required_signature_count # type: str
self._ignore_signature_errors = ignore_signature_errors # type: list[str]
@property
def keyring(self):
return self._keyring
@property
def required_successful_signature_count(self):
return self._required_signature_count
@property
def ignore_signature_errors(self):
if self._ignore_signature_errors is None:
return []
return self._ignore_signature_errors
def get_galaxy_artifact_source_info(self, collection):
# type: (Candidate) -> dict[str, str | list[dict[str, str]]]
server = collection.src.api_server
try:
download_url = self._galaxy_collection_cache[collection][0]
signatures_url, signatures = self._galaxy_collection_origin_cache[collection]
except KeyError as key_err:
raise RuntimeError(
'There is no known source for {coll!s}'.
format(coll=collection),
) from key_err
return {
"format_version": "1.0.0",
"namespace": collection.namespace,
"name": collection.name,
"version": collection.ver,
"server": server,
"version_url": signatures_url,
"download_url": download_url,
"signatures": signatures,
}
def get_galaxy_artifact_path(self, collection):
# type: (Candidate | Requirement) -> bytes
"""Given a Galaxy-stored collection, return a cached path.
If it's not yet on disk, this method downloads the artifact first.
"""
try:
return self._galaxy_artifact_cache[collection]
except KeyError:
pass
try:
url, sha256_hash, token = self._galaxy_collection_cache[collection]
except KeyError as key_err:
raise_from(
RuntimeError(
'There is no known source for {coll!s}'.
format(coll=collection),
),
key_err,
)
display.vvvv(
"Fetching a collection tarball for '{collection!s}' from "
'Ansible Galaxy'.format(collection=collection),
)
try:
b_artifact_path = _download_file(
url,
self._b_working_directory,
expected_hash=sha256_hash,
validate_certs=self._validate_certs,
token=token,
) # type: bytes
except URLError as err:
raise_from(
AnsibleError(
'Failed to download collection tar '
"from '{coll_src!s}': {download_err!s}".
format(
coll_src=to_native(collection.src),
download_err=to_native(err),
),
),
err,
)
else:
display.vvv(
"Collection '{coll!s}' obtained from "
'server {server!s} {url!s}'.format(
coll=collection, server=collection.src or 'Galaxy',
url=collection.src.api_server if collection.src is not None
else '',
)
)
self._galaxy_artifact_cache[collection] = b_artifact_path
return b_artifact_path
def get_artifact_path(self, collection):
# type: (Candidate | Requirement) -> bytes
"""Given a concrete collection pointer, return a cached path.
If it's not yet on disk, this method downloads the artifact first.
"""
try:
return self._artifact_cache[collection.src]
except KeyError:
pass
# NOTE: SCM needs to be special-cased as it may contain either
# NOTE: one collection in its root, or a number of top-level
# NOTE: collection directories instead.
# NOTE: The idea is to store the SCM collection as unpacked
# NOTE: directory structure under the temporary location and use
# NOTE: a "virtual" collection that has pinned requirements on
# NOTE: the directories under that SCM checkout that correspond
# NOTE: to collections.
# NOTE: This brings us to the idea that we need two separate
# NOTE: virtual Requirement/Candidate types --
# NOTE: (single) dir + (multidir) subdirs
if collection.is_url:
display.vvvv(
"Collection requirement '{collection!s}' is a URL "
'to a tar artifact'.format(collection=collection.fqcn),
)
try:
b_artifact_path = _download_file(
collection.src,
self._b_working_directory,
expected_hash=None, # NOTE: URLs don't support checksums
validate_certs=self._validate_certs,
timeout=self.timeout
)
except URLError as err:
raise_from(
AnsibleError(
'Failed to download collection tar '
"from '{coll_src!s}': {download_err!s}".
format(
coll_src=to_native(collection.src),
download_err=to_native(err),
),
),
err,
)
elif collection.is_scm:
b_artifact_path = _extract_collection_from_git(
collection.src,
collection.ver,
self._b_working_directory,
)
elif collection.is_file or collection.is_dir or collection.is_subdirs:
b_artifact_path = to_bytes(collection.src)
else:
# NOTE: This may happen `if collection.is_online_index_pointer`
raise RuntimeError(
'The artifact is of an unexpected type {art_type!s}'.
format(art_type=collection.type)
)
self._artifact_cache[collection.src] = b_artifact_path
return b_artifact_path
def _get_direct_collection_namespace(self, collection):
# type: (Candidate) -> str | None
return self.get_direct_collection_meta(collection)['namespace'] # type: ignore[return-value]
def _get_direct_collection_name(self, collection):
# type: (Candidate) -> str | None
return self.get_direct_collection_meta(collection)['name'] # type: ignore[return-value]
def get_direct_collection_fqcn(self, collection):
# type: (Candidate) -> str | None
"""Extract FQCN from the given on-disk collection artifact.
If the collection is virtual, ``None`` is returned instead
of a string.
"""
if collection.is_virtual:
# NOTE: should it be something like "<virtual>"?
return None
return '.'.join(( # type: ignore[type-var]
self._get_direct_collection_namespace(collection), # type: ignore[arg-type]
self._get_direct_collection_name(collection),
))
def get_direct_collection_version(self, collection):
# type: (Candidate | Requirement) -> str
"""Extract version from the given on-disk collection artifact."""
return self.get_direct_collection_meta(collection)['version'] # type: ignore[return-value]
def get_direct_collection_dependencies(self, collection):
# type: (Candidate | Requirement) -> dict[str, str]
"""Extract deps from the given on-disk collection artifact."""
return self.get_direct_collection_meta(collection)['dependencies'] # type: ignore[return-value]
def get_direct_collection_meta(self, collection):
# type: (Candidate | Requirement) -> dict[str, str | dict[str, str] | list[str] | None]
"""Extract meta from the given on-disk collection artifact."""
try: # FIXME: use unique collection identifier as a cache key?
return self._artifact_meta_cache[collection.src]
except KeyError:
b_artifact_path = self.get_artifact_path(collection)
if collection.is_url or collection.is_file:
collection_meta = _get_meta_from_tar(b_artifact_path)
elif collection.is_dir: # should we just build a coll instead?
# FIXME: what if there's subdirs?
try:
collection_meta = _get_meta_from_dir(b_artifact_path)
except LookupError as lookup_err:
raise_from(
AnsibleError(
'Failed to find the collection dir deps: {err!s}'.
format(err=to_native(lookup_err)),
),
lookup_err,
)
elif collection.is_scm:
collection_meta = {
'name': None,
'namespace': None,
'dependencies': {to_native(b_artifact_path): '*'},
'version': '*',
}
elif collection.is_subdirs:
collection_meta = {
'name': None,
'namespace': None,
# NOTE: Dropping b_artifact_path since it's based on src anyway
'dependencies': dict.fromkeys(
map(to_native, collection.namespace_collection_paths),
'*',
),
'version': '*',
}
else:
raise RuntimeError
self._artifact_meta_cache[collection.src] = collection_meta
return collection_meta
def save_collection_source(self, collection, url, sha256_hash, token, signatures_url, signatures):
# type: (Candidate, str, str, GalaxyToken, str, list[dict[str, str]]) -> None
"""Store collection URL, SHA256 hash and Galaxy API token.
This is a hook that is supposed to be called before attempting to
download Galaxy-based collections with ``get_galaxy_artifact_path()``.
"""
self._galaxy_collection_cache[collection] = url, sha256_hash, token
self._galaxy_collection_origin_cache[collection] = signatures_url, signatures
@classmethod
@contextmanager
def under_tmpdir(
cls,
temp_dir_base, # type: str
validate_certs=True, # type: bool
keyring=None, # type: str
required_signature_count=None, # type: str
ignore_signature_errors=None, # type: list[str]
): # type: (...) -> t.Iterator[ConcreteArtifactsManager]
"""Custom ConcreteArtifactsManager constructor with temp dir.
This method returns a context manager that allocates and cleans
up a temporary directory for caching the collection artifacts
during the dependency resolution process.
"""
# NOTE: Can't use `with tempfile.TemporaryDirectory:`
# NOTE: because it's not in Python 2 stdlib.
temp_path = mkdtemp(
dir=to_bytes(temp_dir_base, errors='surrogate_or_strict'),
)
b_temp_path = to_bytes(temp_path, errors='surrogate_or_strict')
try:
yield cls(
b_temp_path,
validate_certs,
keyring=keyring,
required_signature_count=required_signature_count,
ignore_signature_errors=ignore_signature_errors
)
finally:
rmtree(b_temp_path)
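# Illustrative usage (an assumption, not from the source):
#
#     with ConcreteArtifactsManager.under_tmpdir('/tmp', validate_certs=True) as cam:
#         version = cam.get_direct_collection_version(requirement)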
def parse_scm(collection, version):
"""Extract name, version, path and subdir out of the SCM pointer."""
if ',' in collection:
collection, version = collection.split(',', 1)
elif version == '*' or not version:
version = 'HEAD'
if collection.startswith('git+'):
path = collection[4:]
else:
path = collection
path, fragment = urldefrag(path)
fragment = fragment.strip(os.path.sep)
if path.endswith(os.path.sep + '.git'):
name = path.split(os.path.sep)[-2]
elif '://' not in path and '@' not in path:
name = path
else:
name = path.split('/')[-1]
if name.endswith('.git'):
name = name[:-4]
return name, version, path, fragment
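# Worked example (illustrative, not from the source):
#
#     parse_scm('git+https://github.com/org/repo.git#collections/my_ns,1.2.3', '*')
#     # -> ('repo', '1.2.3', 'https://github.com/org/repo.git', 'collections/my_ns')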
def _extract_collection_from_git(repo_url, coll_ver, b_path):
name, version, git_url, fragment = parse_scm(repo_url, coll_ver)
b_checkout_path = mkdtemp(
dir=b_path,
prefix=to_bytes(name, errors='surrogate_or_strict'),
) # type: bytes
try:
git_executable = get_bin_path('git')
except ValueError as err:
raise AnsibleError(
"Could not find git executable to extract the collection from the Git repository `{repo_url!s}`.".
format(repo_url=to_native(git_url))
) from err
# Perform a shallow clone if simply cloning HEAD
if version == 'HEAD':
git_clone_cmd = git_executable, 'clone', '--depth=1', git_url, to_text(b_checkout_path)
else:
git_clone_cmd = git_executable, 'clone', git_url, to_text(b_checkout_path)
# FIXME: '--branch', version
try:
subprocess.check_call(git_clone_cmd)
except subprocess.CalledProcessError as proc_err:
raise_from(
AnsibleError( # should probably be LookupError
'Failed to clone a Git repository from `{repo_url!s}`.'.
format(repo_url=to_native(git_url)),
),
proc_err,
)
git_switch_cmd = git_executable, 'checkout', to_text(version)
try:
subprocess.check_call(git_switch_cmd, cwd=b_checkout_path)
except subprocess.CalledProcessError as proc_err:
raise_from(
AnsibleError( # should probably be LookupError
'Failed to switch a cloned Git repo `{repo_url!s}` '
'to the requested revision `{commitish!s}`.'.
format(
commitish=to_native(version),
repo_url=to_native(git_url),
),
),
proc_err,
)
return (
os.path.join(b_checkout_path, to_bytes(fragment))
if fragment else b_checkout_path
)
# FIXME: use random subdirs while preserving the file names
def _download_file(url, b_path, expected_hash, validate_certs, token=None, timeout=60):
# type: (str, bytes, str | None, bool, GalaxyToken, int) -> bytes
# ^ NOTE: used in download and verify_collections ^
b_tarball_name = to_bytes(
url.rsplit('/', 1)[1], errors='surrogate_or_strict',
)
b_file_name = b_tarball_name[:-len('.tar.gz')]
b_tarball_dir = mkdtemp(
dir=b_path,
prefix=b'-'.join((b_file_name, b'')),
) # type: bytes
b_file_path = os.path.join(b_tarball_dir, b_tarball_name)
display.display("Downloading %s to %s" % (url, to_text(b_tarball_dir)))
# NOTE: Galaxy redirects downloads to S3 which rejects the request
# NOTE: if an Authorization header is attached so don't redirect it
resp = open_url(
to_native(url, errors='surrogate_or_strict'),
validate_certs=validate_certs,
headers=None if token is None else token.headers(),
unredirected_headers=['Authorization'], http_agent=user_agent(),
timeout=timeout
)
with open(b_file_path, 'wb') as download_file: # type: t.BinaryIO
actual_hash = _consume_file(resp, write_to=download_file)
if expected_hash:
display.vvvv(
'Validating downloaded file hash {actual_hash!s} with '
'expected hash {expected_hash!s}'.
format(actual_hash=actual_hash, expected_hash=expected_hash)
)
if expected_hash != actual_hash:
raise AnsibleError('Mismatch artifact hash with downloaded file')
return b_file_path
def _consume_file(read_from, write_to=None):
# type: (t.BinaryIO, t.BinaryIO) -> str
bufsize = 65536
sha256_digest = sha256()
data = read_from.read(bufsize)
while data:
if write_to is not None:
write_to.write(data)
write_to.flush()
sha256_digest.update(data)
data = read_from.read(bufsize)
return sha256_digest.hexdigest()
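# Illustrative (an assumption): _consume_file(resp) streams the input in 64 KiB
# chunks and returns its SHA-256 hex digest; passing write_to additionally tees
# the bytes to that file object while hashing.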
def _normalize_galaxy_yml_manifest(
galaxy_yml, # type: dict[str, str | list[str] | dict[str, str] | None]
b_galaxy_yml_path, # type: bytes
):
# type: (...) -> dict[str, str | list[str] | dict[str, str] | None]
galaxy_yml_schema = (
get_collections_galaxy_meta_info()
) # type: list[dict[str, t.Any]] # FIXME: <--
# FIXME: 👆maybe precise type: list[dict[str, bool | str | list[str]]]
mandatory_keys = set()
string_keys = set() # type: set[str]
list_keys = set() # type: set[str]
dict_keys = set() # type: set[str]
for info in galaxy_yml_schema:
if info.get('required', False):
mandatory_keys.add(info['key'])
key_list_type = {
'str': string_keys,
'list': list_keys,
'dict': dict_keys,
}[info.get('type', 'str')]
key_list_type.add(info['key'])
all_keys = frozenset(list(mandatory_keys) + list(string_keys) + list(list_keys) + list(dict_keys))
set_keys = set(galaxy_yml.keys())
missing_keys = mandatory_keys.difference(set_keys)
if missing_keys:
raise AnsibleError("The collection galaxy.yml at '%s' is missing the following mandatory keys: %s"
% (to_native(b_galaxy_yml_path), ", ".join(sorted(missing_keys))))
extra_keys = set_keys.difference(all_keys)
if len(extra_keys) > 0:
display.warning("Found unknown keys in collection galaxy.yml at '%s': %s"
% (to_text(b_galaxy_yml_path), ", ".join(extra_keys)))
# Add the defaults if they have not been set
for optional_string in string_keys:
if optional_string not in galaxy_yml:
galaxy_yml[optional_string] = None
for optional_list in list_keys:
list_val = galaxy_yml.get(optional_list, None)
if list_val is None:
galaxy_yml[optional_list] = []
elif not isinstance(list_val, list):
galaxy_yml[optional_list] = [list_val] # type: ignore[list-item]
for optional_dict in dict_keys:
if optional_dict not in galaxy_yml:
galaxy_yml[optional_dict] = {}
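# Sketch of a more tolerant default (an assumption, not the merged fix): an
# explicit `dependencies:` with a null value leaves the key present as None,
# so checking the value instead of key presence would also cover it:
#
#     if galaxy_yml.get(optional_dict) is None:
#         galaxy_yml[optional_dict] = {}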
# NOTE: `version: null` is only allowed for `galaxy.yml`
# NOTE: and not `MANIFEST.json`. The use-case for it is collections
# NOTE: that generate the version from Git before building a
# NOTE: distributable tarball artifact.
if not galaxy_yml.get('version'):
galaxy_yml['version'] = '*'
return galaxy_yml
def _get_meta_from_dir(
b_path, # type: bytes
): # type: (...) -> dict[str, str | list[str] | dict[str, str] | None]
try:
return _get_meta_from_installed_dir(b_path)
except LookupError:
return _get_meta_from_src_dir(b_path)
def _get_meta_from_src_dir(
b_path, # type: bytes
): # type: (...) -> dict[str, str | list[str] | dict[str, str] | None]
galaxy_yml = os.path.join(b_path, _GALAXY_YAML)
if not os.path.isfile(galaxy_yml):
raise LookupError(
"The collection galaxy.yml path '{path!s}' does not exist.".
format(path=to_native(galaxy_yml))
)
with open(galaxy_yml, 'rb') as manifest_file_obj:
try:
manifest = yaml_load(manifest_file_obj)
except yaml.error.YAMLError as yaml_err:
raise_from(
AnsibleError(
"Failed to parse the galaxy.yml at '{path!s}' with "
'the following error:\n{err_txt!s}'.
format(
path=to_native(galaxy_yml),
err_txt=to_native(yaml_err),
),
),
yaml_err,
)
return _normalize_galaxy_yml_manifest(manifest, galaxy_yml)
def _get_json_from_installed_dir(
b_path, # type: bytes
filename, # type: str
): # type: (...) -> dict
b_json_filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
try:
with open(b_json_filepath, 'rb') as manifest_fd:
b_json_text = manifest_fd.read()
except (IOError, OSError):
raise LookupError(
"The collection {manifest!s} path '{path!s}' does not exist.".
format(
manifest=filename,
path=to_native(b_json_filepath),
)
)
manifest_txt = to_text(b_json_text, errors='surrogate_or_strict')
try:
manifest = json.loads(manifest_txt)
except ValueError:
raise AnsibleError(
'Collection tar file member {member!s} does not '
'contain a valid json string.'.
format(member=filename),
)
return manifest
def _get_meta_from_installed_dir(
b_path, # type: bytes
): # type: (...) -> dict[str, str | list[str] | dict[str, str] | None]
manifest = _get_json_from_installed_dir(b_path, MANIFEST_FILENAME)
collection_info = manifest['collection_info']
version = collection_info.get('version')
if not version:
raise AnsibleError(
u'Collection metadata file `{manifest_filename!s}` at `{meta_file!s}` is expected '
u'to have a valid SemVer version value but got {version!s}'.
format(
manifest_filename=MANIFEST_FILENAME,
meta_file=to_text(b_path),
version=to_text(repr(version)),
),
)
return collection_info
def _get_meta_from_tar(
b_path, # type: bytes
): # type: (...) -> dict[str, str | list[str] | dict[str, str] | None]
if not tarfile.is_tarfile(b_path):
raise AnsibleError(
"Collection artifact at '{path!s}' is not a valid tar file.".
format(path=to_native(b_path)),
)
with tarfile.open(b_path, mode='r') as collection_tar: # type: tarfile.TarFile
try:
member = collection_tar.getmember(MANIFEST_FILENAME)
except KeyError:
raise AnsibleError(
"Collection at '{path!s}' does not contain the "
'required file {manifest_file!s}.'.
format(
path=to_native(b_path),
manifest_file=MANIFEST_FILENAME,
),
)
with _tarfile_extract(collection_tar, member) as (_member, member_obj):
if member_obj is None:
raise AnsibleError(
'Collection tar file does not contain '
'member {member!s}'.format(member=MANIFEST_FILENAME),
)
text_content = to_text(
member_obj.read(),
errors='surrogate_or_strict',
)
try:
manifest = json.loads(text_content)
except ValueError:
raise AnsibleError(
'Collection tar file member {member!s} does not '
'contain a valid json string.'.
format(member=MANIFEST_FILENAME),
)
return manifest['collection_info']
@contextmanager
def _tarfile_extract(
tar, # type: tarfile.TarFile
member, # type: tarfile.TarInfo
):
# type: (...) -> t.Iterator[tuple[tarfile.TarInfo, t.IO[bytes] | None]]
tar_obj = tar.extractfile(member)
try:
yield member, tar_obj
finally:
if tar_obj is not None:
tar_obj.close()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,560 |
ansible-galaxy fail if dependencies is none
|
### Summary
If `dependencies` in `galaxy.yml` is defined but not assigned any value, the installation would fail.
Note the empty `dependencies:` key in the first example below; with the key present but no value assigned, the installation fails.
```yml
namespace: my_namespace
name: my_collection
version: 1.0.0
readme: README.md
authors:
- your name <[email protected]>
description: your collection description
license:
- GPL-2.0-or-later
license_file: ''
tags: []
dependencies:
repository: http://example.com/repository
documentation: http://docs.example.com
homepage: http://example.com
issues: http://example.com/issue/tracker
build_ignore: []
```
It works if `dependencies` is assigned to `{}`.
```yml
namespace: my_namespace
name: my_collection
version: 1.0.0
readme: README.md
authors:
- your name <[email protected]>
description: your collection description
license:
- GPL-2.0-or-later
license_file: ''
tags: []
dependencies: {}
repository: http://example.com/repository
documentation: http://docs.example.com
homepage: http://example.com
issues: http://example.com/issue/tracker
build_ignore: []
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become
unstable at any point.
ansible [core 2.14.0.dev0] (devel abdd237de7) last updated 2022/04/19 11:12:55 (GMT +800)
config file = None
configured module search path = ['/Users/jackzhang/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jackzhang/src/github/ansible/lib/ansible
ansible collection location = /Users/jackzhang/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/jackzhang/.pyenv/versions/v-397/bin/ansible
python version = 3.9.7 (default, Oct 28 2021, 19:01:53) [Clang 12.0.5 (clang-1205.0.22.11)]
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
CONFIG_FILE() = None
```
### OS / Environment
MacOS
### Steps to Reproduce
```shell
$ ansible-galaxy collection init my_namespace.my_collection
$ cd my_namespace.my_collection
$ gsed -ri 's/^(dependencies:)(.*)$/\1/' galaxy.yml
$ ansible-galaxy collection install -vvv .
```
### Expected Results
I expect that `my_namespace.my_collection` with an empty `dependencies` key (parsed as `None`) can be installed successfully.
### Actual Results
```console
ansible-galaxy collection install -vvvv .
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying
the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become
unstable at any point.
ansible-galaxy [core 2.14.0.dev0] (devel abdd237de7) last updated 2022/04/19 11:12:55 (GMT +800)
config file = None
configured module search path = ['/Users/jackzhang/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/jackzhang/src/github/ansible/lib/ansible
ansible collection location = /Users/jackzhang/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/jackzhang/.pyenv/versions/v-397/bin/ansible-galaxy
python version = 3.9.7 (default, Oct 28 2021, 19:01:53) [Clang 12.0.5 (clang-1205.0.22.11)]
jinja version = 3.1.1
libyaml = True
No config file found; using defaults
Starting galaxy collection install process
Found installed collection geerlingguy.testing:1.0.0 at '/Users/jackzhang/.ansible/collections/ansible_collections/geerlingguy/testing'
Found installed collection lablabs.wireguard:0.1.1 at '/Users/jackzhang/.ansible/collections/ansible_collections/lablabs/wireguard'
[WARNING]: Collection at '/Users/jackzhang/.ansible/collections/ansible_collections/foo/bar' does not have a MANIFEST.json
file, nor has it galaxy.yml: cannot detect version.
Found installed collection foo.bar:* at '/Users/jackzhang/.ansible/collections/ansible_collections/foo/bar'
Found installed collection robertdebock.development_environment:2.1.1 at '/Users/jackzhang/.ansible/collections/ansible_collections/robertdebock/development_environment'
Found installed collection containers.podman:1.9.3 at '/Users/jackzhang/.ansible/collections/ansible_collections/containers/podman'
Found installed collection community.docker:2.3.0 at '/Users/jackzhang/.ansible/collections/ansible_collections/community/docker'
Found installed collection community.general:4.7.0 at '/Users/jackzhang/.ansible/collections/ansible_collections/community/general'
Found installed collection community.molecule:0.1.0 at '/Users/jackzhang/.ansible/collections/ansible_collections/community/molecule'
Process install dependency map
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'items'
the full traceback was:
Traceback (most recent call last):
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/__init__.py", line 601, in cli_executor
exit_code = cli.run()
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/galaxy.py", line 646, in run
return context.CLIARGS['func']()
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/galaxy.py", line 102, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/galaxy.py", line 1300, in execute_install
self._execute_install_collection(
File "/Users/jackzhang/src/github/ansible/lib/ansible/cli/galaxy.py", line 1328, in _execute_install_collection
install_collections(
File "/Users/jackzhang/src/github/ansible/lib/ansible/galaxy/collection/__init__.py", line 683, in install_collections
dependency_map = _resolve_depenency_map(
File "/Users/jackzhang/src/github/ansible/lib/ansible/galaxy/collection/__init__.py", line 1594, in _resolve_depenency_map
return collection_dep_resolver.resolve(
File "/Users/jackzhang/.pyenv/versions/3.9.7/envs/v-397/lib/python3.9/site-packages/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/Users/jackzhang/.pyenv/versions/3.9.7/envs/v-397/lib/python3.9/site-packages/resolvelib/resolvers.py", line 347, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "/Users/jackzhang/.pyenv/versions/3.9.7/envs/v-397/lib/python3.9/site-packages/resolvelib/resolvers.py", line 207, in _attempt_to_pin_criterion
criteria = self._get_criteria_to_update(candidate)
File "/Users/jackzhang/.pyenv/versions/3.9.7/envs/v-397/lib/python3.9/site-packages/resolvelib/resolvers.py", line 198, in _get_criteria_to_update
for r in self._p.get_dependencies(candidate):
File "/Users/jackzhang/src/github/ansible/lib/ansible/galaxy/dependency_resolution/providers.py", line 425, in get_dependencies
for dep_name, dep_req in req_map.items()
AttributeError: 'NoneType' object has no attribute 'items'
```
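A plausible defensive fix — sketched here for illustration only; the real change lives in the linked PR and may differ — is to normalize a `None` dependencies mapping to an empty dict before iterating:
```python
# Hypothetical helper; the name and call site are illustrative, not the
# actual ones in ansible.galaxy.dependency_resolution.providers.
def iter_dependencies(req_map):
    # A bare `dependencies:` key in galaxy.yml parses to None; treat it the
    # same as an explicit empty mapping `{}` so .items() is always safe.
    return [(name, req) for name, req in (req_map or {}).items()]
```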
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77560
|
https://github.com/ansible/ansible/pull/77561
|
4faa576ee9c0892668cd150307eae600f7cc4068
|
4d69c09695c8f78b95edf51314999be3c19b62eb
| 2022-04-19T07:59:36Z |
python
| 2022-04-19T19:54:45Z |
test/units/galaxy/test_collection_install.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import json
import os
import pytest
import re
import shutil
import stat
import tarfile
import yaml
from io import BytesIO, StringIO
from mock import MagicMock, patch
from unittest import mock
import ansible.module_utils.six.moves.urllib.error as urllib_error
from ansible import context
from ansible.cli.galaxy import GalaxyCLI
from ansible.errors import AnsibleError
from ansible.galaxy import collection, api, dependency_resolution
from ansible.galaxy.dependency_resolution.dataclasses import Candidate, Requirement
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.process import get_bin_path
from ansible.utils import context_objects as co
from ansible.utils.display import Display
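# Test helper that captures the candidate list returned by the dependency
# provider's find_matches() so tests can assert on resolver candidate ordering.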
class RequirementCandidates():
def __init__(self):
self.candidates = []
def func_wrapper(self, func):
def run(*args, **kwargs):
self.candidates = func(*args, **kwargs)
return self.candidates
return run
def call_galaxy_cli(args):
orig = co.GlobalCLIArgs._Singleton__instance
co.GlobalCLIArgs._Singleton__instance = None
try:
GalaxyCLI(args=['ansible-galaxy', 'collection'] + args).run()
finally:
co.GlobalCLIArgs._Singleton__instance = orig
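# The helpers below fabricate Galaxy API JSON payloads (v2 and v3 shapes) so
# the install code paths can be exercised without a real Galaxy server.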
def artifact_json(namespace, name, version, dependencies, server):
json_str = json.dumps({
'artifact': {
'filename': '%s-%s-%s.tar.gz' % (namespace, name, version),
'sha256': '2d76f3b8c4bab1072848107fb3914c345f71a12a1722f25c08f5d3f51f4ab5fd',
'size': 1234,
},
'download_url': '%s/download/%s-%s-%s.tar.gz' % (server, namespace, name, version),
'metadata': {
'namespace': namespace,
'name': name,
'dependencies': dependencies,
},
'version': version
})
return to_text(json_str)
def artifact_versions_json(namespace, name, versions, galaxy_api, available_api_versions=None):
results = []
available_api_versions = available_api_versions or {}
api_version = 'v2'
if 'v3' in available_api_versions:
api_version = 'v3'
for version in versions:
results.append({
'href': '%s/api/%s/%s/%s/versions/%s/' % (galaxy_api.api_server, api_version, namespace, name, version),
'version': version,
})
if api_version == 'v2':
json_str = json.dumps({
'count': len(versions),
'next': None,
'previous': None,
'results': results
})
if api_version == 'v3':
response = {'meta': {'count': len(versions)},
'data': results,
'links': {'first': None,
'last': None,
'next': None,
'previous': None},
}
json_str = json.dumps(response)
return to_text(json_str)
def error_json(galaxy_api, errors_to_return=None, available_api_versions=None):
errors_to_return = errors_to_return or []
available_api_versions = available_api_versions or {}
response = {}
api_version = 'v2'
if 'v3' in available_api_versions:
api_version = 'v3'
if api_version == 'v2':
assert len(errors_to_return) <= 1
if errors_to_return:
response = errors_to_return[0]
if api_version == 'v3':
response['errors'] = errors_to_return
json_str = json.dumps(response)
return to_text(json_str)
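# Reset the global CLI args singleton around every test so parsed CLI state
# does not leak between test cases.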
@pytest.fixture(autouse='function')
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
yield
co.GlobalCLIArgs._Singleton__instance = None
@pytest.fixture()
def collection_artifact(request, tmp_path_factory):
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
namespace = 'ansible_namespace'
collection = 'collection'
skeleton_path = os.path.join(os.path.dirname(os.path.split(__file__)[0]), 'cli', 'test_data', 'collection_skeleton')
collection_path = os.path.join(test_dir, namespace, collection)
call_galaxy_cli(['init', '%s.%s' % (namespace, collection), '-c', '--init-path', test_dir,
'--collection-skeleton', skeleton_path])
dependencies = getattr(request, 'param', None)
if dependencies:
galaxy_yml = os.path.join(collection_path, 'galaxy.yml')
with open(galaxy_yml, 'rb+') as galaxy_obj:
existing_yaml = yaml.safe_load(galaxy_obj)
existing_yaml['dependencies'] = dependencies
galaxy_obj.seek(0)
galaxy_obj.write(to_bytes(yaml.safe_dump(existing_yaml)))
galaxy_obj.truncate()
# Create a file with +x in the collection so we can test the permissions
execute_path = os.path.join(collection_path, 'runme.sh')
with open(execute_path, mode='wb') as fd:
fd.write(b"echo hi")
os.chmod(execute_path, os.stat(execute_path).st_mode | stat.S_IEXEC)
call_galaxy_cli(['build', collection_path, '--output-path', test_dir])
collection_tar = os.path.join(test_dir, '%s-%s-0.1.0.tar.gz' % (namespace, collection))
return to_bytes(collection_path), to_bytes(collection_tar)
@pytest.fixture()
def galaxy_server():
context.CLIARGS._store = {'ignore_certs': False}
galaxy_api = api.GalaxyAPI(None, 'test_server', 'https://galaxy.ansible.com')
galaxy_api.get_collection_signatures = MagicMock(return_value=[])
return galaxy_api
def test_concrete_artifact_manager_scm_no_executable(monkeypatch):
url = 'https://github.com/org/repo'
version = 'commitish'
mock_subprocess_check_call = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager.subprocess, 'check_call', mock_subprocess_check_call)
mock_mkdtemp = MagicMock(return_value='')
monkeypatch.setattr(collection.concrete_artifact_manager, 'mkdtemp', mock_mkdtemp)
error = re.escape(
"Could not find git executable to extract the collection from the Git repository `https://github.com/org/repo`"
)
with mock.patch.dict(os.environ, {"PATH": ""}):
with pytest.raises(AnsibleError, match=error):
collection.concrete_artifact_manager._extract_collection_from_git(url, version, b'path')
@pytest.mark.parametrize(
'url,version,trailing_slash',
[
('https://github.com/org/repo', 'commitish', False),
('https://github.com/org/repo,commitish', None, False),
('https://github.com/org/repo/,commitish', None, True),
('https://github.com/org/repo#,commitish', None, False),
]
)
def test_concrete_artifact_manager_scm_cmd(url, version, trailing_slash, monkeypatch):
mock_subprocess_check_call = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager.subprocess, 'check_call', mock_subprocess_check_call)
mock_mkdtemp = MagicMock(return_value='')
monkeypatch.setattr(collection.concrete_artifact_manager, 'mkdtemp', mock_mkdtemp)
collection.concrete_artifact_manager._extract_collection_from_git(url, version, b'path')
assert mock_subprocess_check_call.call_count == 2
repo = 'https://github.com/org/repo'
if trailing_slash:
repo += '/'
git_executable = get_bin_path('git')
clone_cmd = (git_executable, 'clone', repo, '')
assert mock_subprocess_check_call.call_args_list[0].args[0] == clone_cmd
assert mock_subprocess_check_call.call_args_list[1].args[0] == (git_executable, 'checkout', 'commitish')
@pytest.mark.parametrize(
'url,version,trailing_slash',
[
('https://github.com/org/repo', 'HEAD', False),
('https://github.com/org/repo,HEAD', None, False),
('https://github.com/org/repo/,HEAD', None, True),
('https://github.com/org/repo#,HEAD', None, False),
('https://github.com/org/repo', None, False),
]
)
def test_concrete_artifact_manager_scm_cmd_shallow(url, version, trailing_slash, monkeypatch):
mock_subprocess_check_call = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager.subprocess, 'check_call', mock_subprocess_check_call)
mock_mkdtemp = MagicMock(return_value='')
monkeypatch.setattr(collection.concrete_artifact_manager, 'mkdtemp', mock_mkdtemp)
collection.concrete_artifact_manager._extract_collection_from_git(url, version, b'path')
assert mock_subprocess_check_call.call_count == 2
repo = 'https://github.com/org/repo'
if trailing_slash:
repo += '/'
git_executable = get_bin_path('git')
shallow_clone_cmd = (git_executable, 'clone', '--depth=1', repo, '')
assert mock_subprocess_check_call.call_args_list[0].args[0] == shallow_clone_cmd
assert mock_subprocess_check_call.call_args_list[1].args[0] == (git_executable, 'checkout', 'HEAD')
def test_build_requirement_from_path(collection_artifact):
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
assert actual.namespace == u'ansible_namespace'
assert actual.name == u'collection'
assert actual.src == collection_artifact[0]
assert actual.ver == u'0.1.0'
@pytest.mark.parametrize('version', ['1.1.1', '1.1.0', '1.0.0'])
def test_build_requirement_from_path_with_manifest(version, collection_artifact):
manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json')
manifest_value = json.dumps({
'collection_info': {
'namespace': 'namespace',
'name': 'name',
'version': version,
'dependencies': {
'ansible_namespace.collection': '*'
}
}
})
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(to_bytes(manifest_value))
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
# While the folder name suggests a different collection, we treat MANIFEST.json as the source of truth.
assert actual.namespace == u'namespace'
assert actual.name == u'name'
assert actual.src == collection_artifact[0]
assert actual.ver == to_text(version)
def test_build_requirement_from_path_invalid_manifest(collection_artifact):
manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json')
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(b"not json")
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
expected = "Collection tar file member MANIFEST.json does not contain a valid json string."
with pytest.raises(AnsibleError, match=expected):
Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
def test_build_artifact_from_path_no_version(collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
# a collection artifact should always contain a valid version
manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json')
manifest_value = json.dumps({
'collection_info': {
'namespace': 'namespace',
'name': 'name',
'version': '',
'dependencies': {}
}
})
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(to_bytes(manifest_value))
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
expected = (
'^Collection metadata file `.*` at `.*` is expected to have a valid SemVer '
'version value but got {empty_unicode_string!r}$'.
format(empty_unicode_string=u'')
)
with pytest.raises(AnsibleError, match=expected):
Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
def test_build_requirement_from_path_no_version(collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
# version may be falsey/arbitrary strings for collections in development
manifest_path = os.path.join(collection_artifact[0], b'galaxy.yml')
metadata = {
'authors': ['Ansible'],
'readme': 'README.md',
'namespace': 'namespace',
'name': 'name',
'version': '',
'dependencies': {},
}
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(to_bytes(yaml.safe_dump(metadata)))
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
# While the folder name suggests a different collection, we treat MANIFEST.json as the source of truth.
assert actual.namespace == u'namespace'
assert actual.name == u'name'
assert actual.src == collection_artifact[0]
assert actual.ver == u'*'
def test_build_requirement_from_tar(collection_artifact):
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_requirement_dict({'name': to_text(collection_artifact[1])}, concrete_artifact_cm)
assert actual.namespace == u'ansible_namespace'
assert actual.name == u'collection'
assert actual.src == to_text(collection_artifact[1])
assert actual.ver == u'0.1.0'
def test_build_requirement_from_tar_fail_not_tar(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
test_file = os.path.join(test_dir, b'fake.tar.gz')
with open(test_file, 'wb') as test_obj:
test_obj.write(b"\x00\x01\x02\x03")
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
expected = "Collection artifact at '%s' is not a valid tar file." % to_native(test_file)
with pytest.raises(AnsibleError, match=expected):
Requirement.from_requirement_dict({'name': to_text(test_file)}, concrete_artifact_cm)
def test_build_requirement_from_tar_no_manifest(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
json_data = to_bytes(json.dumps(
{
'files': [],
'format': 1,
}
))
tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(json_data)
tar_info = tarfile.TarInfo('FILES.json')
tar_info.size = len(json_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
expected = "Collection at '%s' does not contain the required file MANIFEST.json." % to_native(tar_path)
with pytest.raises(AnsibleError, match=expected):
Requirement.from_requirement_dict({'name': to_text(tar_path)}, concrete_artifact_cm)
def test_build_requirement_from_tar_no_files(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
json_data = to_bytes(json.dumps(
{
'collection_info': {},
}
))
tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(json_data)
tar_info = tarfile.TarInfo('MANIFEST.json')
tar_info.size = len(json_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
with pytest.raises(KeyError, match='namespace'):
Requirement.from_requirement_dict({'name': to_text(tar_path)}, concrete_artifact_cm)
def test_build_requirement_from_tar_invalid_manifest(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
json_data = b"not a json"
tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(json_data)
tar_info = tarfile.TarInfo('MANIFEST.json')
tar_info.size = len(json_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
expected = "Collection tar file member MANIFEST.json does not contain a valid json string."
with pytest.raises(AnsibleError, match=expected):
Requirement.from_requirement_dict({'name': to_text(tar_path)}, concrete_artifact_cm)
def test_build_requirement_from_name(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.1.9', '2.1.10']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_version_metadata = MagicMock(
namespace='namespace', name='collection',
version='2.1.10', artifact_sha256='', dependencies={}
)
monkeypatch.setattr(api.GalaxyAPI, 'get_collection_version_metadata', mock_version_metadata)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
collections = ['namespace.collection']
requirements_file = None
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', collections[0]])
requirements = cli._require_one_of_collections_requirements(
collections, requirements_file, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.ver == u'2.1.10'
assert actual.src == galaxy_server
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_build_requirement_from_name_with_prerelease(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '2.0.1-beta.1', '2.0.1']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.1'
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_build_requirment_from_name_with_prerelease_explicit(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '2.0.1-beta.1', '2.0.1']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1-beta.1', None, None,
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:2.0.1-beta.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:2.0.1-beta.1'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.1-beta.1'
assert mock_get_info.call_count == 1
assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.1-beta.1')
def test_build_requirement_from_name_second_server(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '1.0.2', '1.0.3']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '1.0.3', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
broken_server = copy.copy(galaxy_server)
broken_server.api_server = 'https://broken.com/'
mock_version_list = MagicMock()
mock_version_list.return_value = []
monkeypatch.setattr(broken_server, 'get_collection_versions', mock_version_list)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>1.0.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:>1.0.1'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [broken_server, galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'1.0.3'
assert mock_version_list.call_count == 1
assert mock_version_list.mock_calls[0][1] == ('namespace', 'collection')
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_build_requirement_from_name_missing(galaxy_server, monkeypatch, tmp_path_factory):
mock_open = MagicMock()
mock_open.return_value = []
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_open)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>1.0.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n* namespace.collection:* (direct request)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server, galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_build_requirement_from_name_401_unauthorized(galaxy_server, monkeypatch, tmp_path_factory):
mock_open = MagicMock()
mock_open.side_effect = api.GalaxyError(urllib_error.HTTPError('https://galaxy.server.com', 401, 'msg', {},
StringIO()), "error")
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_open)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>1.0.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "error (HTTP Code: 401, Message: msg)"
with pytest.raises(api.GalaxyError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server, galaxy_server], concrete_artifact_cm, None, False, False, False, False)
def test_build_requirement_from_name_single_version(galaxy_server, monkeypatch, tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
multi_api_proxy = collection.galaxy_api_proxy.MultiGalaxyAPIProxy([galaxy_server], concrete_artifact_cm)
dep_provider = dependency_resolution.providers.CollectionDependencyProvider(apis=multi_api_proxy, concrete_artifacts_manager=concrete_artifact_cm)
matches = RequirementCandidates()
mock_find_matches = MagicMock(side_effect=matches.func_wrapper(dep_provider.find_matches), autospec=True)
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.0']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.0', None, None,
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:==2.0.0'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:==2.0.0'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.0'
assert [c.ver for c in matches.candidates] == [u'2.0.0']
assert mock_get_info.call_count == 1
assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.0')
def test_build_requirement_from_name_multiple_versions_one_match(galaxy_server, monkeypatch, tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
multi_api_proxy = collection.galaxy_api_proxy.MultiGalaxyAPIProxy([galaxy_server], concrete_artifact_cm)
dep_provider = dependency_resolution.providers.CollectionDependencyProvider(apis=multi_api_proxy, concrete_artifacts_manager=concrete_artifact_cm)
matches = RequirementCandidates()
mock_find_matches = MagicMock(side_effect=matches.func_wrapper(dep_provider.find_matches), autospec=True)
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.0', '2.0.1', '2.0.2']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1', None, None,
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>=2.0.1,<2.0.2'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:>=2.0.1,<2.0.2'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.1'
assert [c.ver for c in matches.candidates] == [u'2.0.1']
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
assert mock_get_info.call_count == 1
assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.1')
def test_build_requirement_from_name_multiple_version_results(galaxy_server, monkeypatch, tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
multi_api_proxy = collection.galaxy_api_proxy.MultiGalaxyAPIProxy([galaxy_server], concrete_artifact_cm)
dep_provider = dependency_resolution.providers.CollectionDependencyProvider(apis=multi_api_proxy, concrete_artifacts_manager=concrete_artifact_cm)
matches = RequirementCandidates()
mock_find_matches = MagicMock(side_effect=matches.func_wrapper(dep_provider.find_matches), autospec=True)
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.5', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '1.0.2', '1.0.3']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_versions.return_value = ['2.0.0', '2.0.1', '2.0.2', '2.0.3', '2.0.4', '2.0.5']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:!=2.0.2'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:!=2.0.2'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.5'
# should be ordered latest to earliest
assert [c.ver for c in matches.candidates] == [u'2.0.5', u'2.0.4', u'2.0.3', u'2.0.1', u'2.0.0']
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_server):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.5', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.5']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:!=2.0.5'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:!=2.0.5'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n"
expected += "* namespace.collection:!=2.0.5 (direct request)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_dep_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_server):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_get_info_return = [
api.CollectionVersionMetadata('parent', 'collection', '2.0.5', None, None, {'namespace.collection': '!=1.0.0'}, None, None),
api.CollectionVersionMetadata('namespace', 'collection', '1.0.0', None, None, {}, None, None),
]
mock_get_info = MagicMock(side_effect=mock_get_info_return)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock(side_effect=[['2.0.5'], ['1.0.0']])
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'parent.collection:2.0.5'])
requirements = cli._require_one_of_collections_requirements(
['parent.collection:2.0.5'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n"
expected += "* namespace.collection:!=1.0.0 (dependency of parent.collection:2.0.5)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_install_installed_collection(monkeypatch, tmp_path_factory, galaxy_server):
mock_installed_collections = MagicMock(return_value=[Candidate('namespace.collection', '1.2.3', None, 'dir', None)])
monkeypatch.setattr(collection, 'find_existing_collections', mock_installed_collections)
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '1.2.3', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock(return_value=['1.2.3', '1.3.0'])
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection'])
cli.run()
expected = "Nothing to do. All requested collections are already installed. If you want to reinstall them, consider using `--force`."
assert mock_display.mock_calls[1][1][0] == expected
def test_install_collection(collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
collection_tar = collection_artifact[1]
temp_path = os.path.join(os.path.split(collection_tar)[0], b'temp')
os.makedirs(temp_path)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
output_path = os.path.join(os.path.split(collection_tar)[0])
collection_path = os.path.join(output_path, b'ansible_namespace', b'collection')
os.makedirs(os.path.join(collection_path, b'delete_me')) # Create a folder to verify the install cleans out the dir
candidate = Candidate('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)
collection.install(candidate, to_text(output_path), concrete_artifact_cm)
# Ensure the temp directory is empty, nothing is left behind
assert os.listdir(temp_path) == []
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
assert stat.S_IMODE(os.stat(os.path.join(collection_path, b'plugins')).st_mode) == 0o0755
assert stat.S_IMODE(os.stat(os.path.join(collection_path, b'README.md')).st_mode) == 0o0644
assert stat.S_IMODE(os.stat(os.path.join(collection_path, b'runme.sh')).st_mode) == 0o0755
assert mock_display.call_count == 2
assert mock_display.mock_calls[0][1][0] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" \
% to_text(collection_path)
assert mock_display.mock_calls[1][1][0] == "ansible_namespace.collection:0.1.0 was installed successfully"
def test_install_collection_with_download(galaxy_server, collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
shutil.rmtree(collection_path)
collections_dir = ('%s' % os.path.sep).join(to_text(collection_path).split('%s' % os.path.sep)[:-2])
temp_path = os.path.join(os.path.split(collection_tar)[0], b'temp')
os.makedirs(temp_path)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
mock_download = MagicMock()
mock_download.return_value = collection_tar
monkeypatch.setattr(concrete_artifact_cm, 'get_galaxy_artifact_path', mock_download)
req = Candidate('ansible_namespace.collection', '0.1.0', 'https://downloadme.com', 'galaxy', None)
collection.install(req, to_text(collections_dir), concrete_artifact_cm)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
assert mock_display.call_count == 2
assert mock_display.mock_calls[0][1][0] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" \
% to_text(collection_path)
assert mock_display.mock_calls[1][1][0] == "ansible_namespace.collection:0.1.0 was installed successfully"
assert mock_download.call_count == 1
assert mock_download.mock_calls[0][1][0].src == 'https://downloadme.com'
assert mock_download.mock_calls[0][1][0].type == 'galaxy'
def test_install_collections_from_tar(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
shutil.rmtree(collection_path)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
with open(os.path.join(collection_path, b'MANIFEST.json'), 'rb') as manifest_obj:
actual_manifest = json.loads(to_text(manifest_obj.read()))
assert actual_manifest['collection_info']['namespace'] == 'ansible_namespace'
assert actual_manifest['collection_info']['name'] == 'collection'
assert actual_manifest['collection_info']['version'] == '0.1.0'
# Filter out the progress cursor display calls.
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert len(display_msgs) == 4
assert display_msgs[0] == "Process install dependency map"
assert display_msgs[1] == "Starting collection install process"
assert display_msgs[2] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" % to_text(collection_path)
def test_install_collections_existing_without_force(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
assert os.path.isdir(collection_path)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'README.md', b'docs', b'galaxy.yml', b'playbooks', b'plugins', b'roles', b'runme.sh']
# Filter out the progress cursor display calls.
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert len(display_msgs) == 1
assert display_msgs[0] == 'Nothing to do. All requested collections are already installed. If you want to reinstall them, consider using `--force`.'
for msg in display_msgs:
assert 'WARNING' not in msg
def test_install_missing_metadata_warning(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
for file in [b'MANIFEST.json', b'galaxy.yml']:
b_path = os.path.join(collection_path, file)
if os.path.isfile(b_path):
os.unlink(b_path)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert 'WARNING' in display_msgs[0]
# Makes sure we don't get stuck in some recursive loop
@pytest.mark.parametrize('collection_artifact', [
{'ansible_namespace.collection': '>=0.0.1'},
], indirect=True)
def test_install_collection_with_circular_dependency(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
shutil.rmtree(collection_path)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
with open(os.path.join(collection_path, b'MANIFEST.json'), 'rb') as manifest_obj:
actual_manifest = json.loads(to_text(manifest_obj.read()))
assert actual_manifest['collection_info']['namespace'] == 'ansible_namespace'
assert actual_manifest['collection_info']['name'] == 'collection'
assert actual_manifest['collection_info']['version'] == '0.1.0'
# Filter out the progress cursor display calls.
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert len(display_msgs) == 4
assert display_msgs[0] == "Process install dependency map"
assert display_msgs[1] == "Starting collection install process"
assert display_msgs[2] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" % to_text(collection_path)
assert display_msgs[3] == "ansible_namespace.collection:0.1.0 was installed successfully"
@pytest.mark.parametrize(
"signatures,required_successful_count,ignore_errors,expected_success",
[
([], 'all', [], True),
(["good_signature"], 'all', [], True),
(["good_signature", collection.gpg.GpgBadArmor(status='failed')], 'all', [], False),
([collection.gpg.GpgBadArmor(status='failed')], 'all', [], False),
# This is expected to succeed because ignored does not increment failed signatures.
# "all" signatures is not a specific number, so all == no (non-ignored) signatures in this case.
([collection.gpg.GpgBadArmor(status='failed')], 'all', ["BADARMOR"], True),
([collection.gpg.GpgBadArmor(status='failed'), "good_signature"], 'all', ["BADARMOR"], True),
([], '+all', [], False),
([collection.gpg.GpgBadArmor(status='failed')], '+all', ["BADARMOR"], False),
([], '1', [], True),
([], '+1', [], False),
(["good_signature"], '2', [], False),
(["good_signature", collection.gpg.GpgBadArmor(status='failed')], '2', [], False),
# This is expected to fail because ignored does not increment successful signatures.
# 2 signatures are required, but only 1 is successful.
(["good_signature", collection.gpg.GpgBadArmor(status='failed')], '2', ["BADARMOR"], False),
(["good_signature", "good_signature"], '2', [], True),
]
)
def test_verify_file_signatures(signatures, required_successful_count, ignore_errors, expected_success):
# type: (List[bool], int, bool, bool) -> None
def gpg_error_generator(results):
for result in results:
if isinstance(result, collection.gpg.GpgBaseError):
yield result
fqcn = 'ns.coll'
manifest_file = 'MANIFEST.json'
keyring = '~/.ansible/pubring.kbx'
with patch.object(collection, 'run_gpg_verify', MagicMock(return_value=("somestdout", 0,))):
with patch.object(collection, 'parse_gpg_errors', MagicMock(return_value=gpg_error_generator(signatures))):
assert collection.verify_file_signatures(
fqcn,
manifest_file,
signatures,
keyring,
required_successful_count,
ignore_errors
) == expected_success
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,582 |
Git module not accepting host key
|
### Summary
If I create a new user and install ansible with:
```
[root@awx-devel ~]# useradd tester
[root@awx-devel ~]# sudo su - tester
[tester@awx-devel ~]$ python3 -m venv ~/ansible_git_test_venv
[tester@awx-devel ~]$ source ~/ansible_git_test_venv/bin/activate
(ansible_git_test_venv) [tester@awx-devel ~]$ pip install ansible==6.0.0a1
```
And confirm I have the latest changes to the git module:
```
(ansible_git_test_venv) [tester@awx-devel ~]$ ansible-doc git | grep "Be aware that this disables a protection against MITM attacks"
Be aware that this disables a protection against MITM attacks.
```
Then create a simple playbook like:
```
(ansible_git_test_venv) [tester@awx-devel ~]$ cat <<__EOF__ >./test.yml
> ---
> - name: Test git
> hosts: localhost
> connection: local
> tasks:
> - name: update project using git
> git:
> dest: "temp"
> repo: "[email protected]:test/private_repo.git"
> version: "HEAD"
> force: "False"
> accept_hostkey: True
> register: git_result
>
> - debug:
> var: git_result
> __EOF__
```
Running this playbook causes an issue:
```
(ansible_git_test_venv) [tester@awx-devel ~]$ ansible-playbook test.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test git] ****************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************
ok: [localhost]
TASK [update project using git] ****************************************************************************************************************************
The authenticity of host 'github.com (140.82.112.4)' can't be established.
ED25519 key fingerprint is SHA256:+DiY3wvvV6TuJJhbpZisF/zLDA0zPMSvHdkr4UvCOqU.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?
```
It's not "respecting" the accept_hostkey parameter.
After some debugging I can see that the env var is being set correctly and passed down into the subprocess running git, but I can't find any documentation that the git client actually does anything with an environment variable called GIT_SSH_OPTS.
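For what it's worth, `git` itself honors `GIT_SSH_COMMAND` (available since git 2.3) rather than anything called `GIT_SSH_OPTS`. A minimal sketch (my own, illustrative only) of invoking git in a way the client actually respects:
```python
# Illustrative sketch: route the ssh option through GIT_SSH_COMMAND, which the
# git client documents and reads, unlike GIT_SSH_OPTS.
import os
import subprocess

env = dict(os.environ, GIT_SSH_COMMAND="ssh -o StrictHostKeyChecking=no")
subprocess.check_call(
    ["git", "clone", "[email protected]:test/private_repo.git", "temp"],
    env=env,
)
```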
### Issue Type
Bug Report
### Component Name
git
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.0b0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tester/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tester/ansible_git_test_venv/lib64/python3.10/site-packages/ansible
ansible collection location = /home/tester/.ansible/collections:/usr/share/ansible/collections
executable location = /home/tester/ansible_git_test_venv/bin/ansible
python version = 3.10.3 (main, Mar 18 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)]
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
RHEL 8, Fedora 35
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- name: Test git
hosts: localhost
connection: local
tasks:
- name: update project using git
git:
dest: "temp"
repo: "[email protected]:test/private_repo.git"
version: "HEAD"
force: "False"
accept_hostkey: True
register: git_result
- debug:
var: git_result
```
### Expected Results
I expected not to be prompted to accept the host key.
### Actual Results
```console
TASK [update project using git] ****************************************************************************************************************************
The authenticity of host 'github.com (140.82.112.4)' can't be established.
ED25519 key fingerprint is SHA256:+DiY3wvvV6TuJJhbpZisF/zLDA0zPMSvHdkr4UvCOqU.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?
```
It's not "respecting" the accept_hostkey parameter.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77582
|
https://github.com/ansible/ansible/pull/77588
|
1b947eaf92b6833d2a4fd019a30d7b85406f1778
|
d06d99cbb6a61db043436092663de8b20f6d2017
| 2022-04-20T13:35:06Z |
python
| 2022-04-21T20:44:42Z |
changelogs/fragments/git_fixes.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,582 |
Git module not accepting host key
|
### Summary
If I create a new user and install ansible with:
```
[root@awx-devel ~]# useradd tester
[root@awx-devel ~]# sudo su - tester
[tester@awx-devel ~]$ python3 -m venv ~/ansible_git_test_venv
[tester@awx-devel ~]$ source ~/ansible_git_test_venv/bin/activate
(ansible_git_test_venv) [tester@awx-devel ~]$ pip install ansible==6.0.0a1
```
And confirm I have the latest changes to the git module:
```
(ansible_git_test_venv) [tester@awx-devel ~]$ ansible-doc git | grep "Be aware that this disables a protection against MITM attacks"
Be aware that this disables a protection against MITM attacks.
```
Then create a simple playbook like:
```
(ansible_git_test_venv) [tester@awx-devel ~]$ cat <<__EOF__ >./test.yml
> ---
> - name: Test git
> hosts: localhost
> connection: local
> tasks:
> - name: update project using git
> git:
> dest: "temp"
> repo: "[email protected]:test/private_repo.git"
> version: "HEAD"
> force: "False"
> accept_hostkey: True
> register: git_result
>
> - debug:
> var: git_result
> __EOF__
```
Running this playbook causes an issue:
```
(ansible_git_test_venv) [tester@awx-devel ~]$ ansible-playbook test.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test git] ****************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************
ok: [localhost]
TASK [update project using git] ****************************************************************************************************************************
The authenticity of host 'github.com (140.82.112.4)' can't be established.
ED25519 key fingerprint is SHA256:+DiY3wvvV6TuJJhbpZisF/zLDA0zPMSvHdkr4UvCOqU.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?
```
It's not "respecting" the accept_hostkey parameter.
After some debugging I can see that the env var is being set correctly and passed down into the subprocess running git, but I can't find any documentation that the git client actually does anything with an environment variable called GIT_SSH_OPTS.
### Issue Type
Bug Report
### Component Name
git
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.0b0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tester/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tester/ansible_git_test_venv/lib64/python3.10/site-packages/ansible
ansible collection location = /home/tester/.ansible/collections:/usr/share/ansible/collections
executable location = /home/tester/ansible_git_test_venv/bin/ansible
python version = 3.10.3 (main, Mar 18 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)]
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
RHEL 8, Fedora 35
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- name: Test git
  hosts: localhost
  connection: local
  tasks:
    - name: update project using git
      git:
        dest: "temp"
        repo: "[email protected]:test/private_repo.git"
        version: "HEAD"
        force: "False"
        accept_hostkey: True
      register: git_result

    - debug:
        var: git_result
```
### Expected Results
I expected to not get prompted on the host key.
### Actual Results
```console
TASK [update project using git] ****************************************************************************************************************************
The authenticity of host 'github.com (140.82.112.4)' can't be established.
ED25519 key fingerprint is SHA256:+DiY3wvvV6TuJJhbpZisF/zLDA0zPMSvHdkr4UvCOqU.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?
It's not "respecting" the accept_hostkey parameter.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77582
|
https://github.com/ansible/ansible/pull/77588
|
1b947eaf92b6833d2a4fd019a30d7b85406f1778
|
d06d99cbb6a61db043436092663de8b20f6d2017
| 2022-04-20T13:35:06Z |
python
| 2022-04-21T20:44:42Z |
lib/ansible/modules/git.py
|
# -*- coding: utf-8 -*-
# (c) 2012, Michael DeHaan <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: git
author:
- "Ansible Core Team"
- "Michael DeHaan"
version_added: "0.0.1"
short_description: Deploy software (or files) from git checkouts
description:
- Manage I(git) checkouts of repositories to deploy files or software.
extends_documentation_fragment: action_common_attributes
options:
repo:
description:
- git, SSH, or HTTP(S) protocol address of the git repository.
type: str
required: true
aliases: [ name ]
dest:
description:
- The path of where the repository should be checked out. This
is equivalent to C(git clone [repo_url] [directory]). The repository
named in I(repo) is not appended to this path and the destination directory must be empty. This
parameter is required, unless I(clone) is set to C(no).
type: path
required: true
version:
description:
- What version of the repository to check out. This can be
the literal string C(HEAD), a branch name, a tag name.
It can also be a I(SHA-1) hash, in which case I(refspec) needs
to be specified if the given revision is not already available.
type: str
default: "HEAD"
accept_hostkey:
description:
- Will ensure or not that "-o StrictHostKeyChecking=no" is present as an ssh option.
- Be aware that this disables a protection against MITM attacks.
- Those using OpenSSH >= 7.5 might want to set I(ssh_opts) to 'StrictHostKeyChecking=accept-new'
instead, it does not remove the MITM issue but it does restrict it to the first attempt.
type: bool
default: 'no'
version_added: "1.5"
accept_newhostkey:
description:
- As of OpenSSH 7.5, "-o StrictHostKeyChecking=accept-new" can be
used, which is safer and will only accept host keys that are
not present or are the same. If C(yes), ensure that
"-o StrictHostKeyChecking=accept-new" is present as an ssh option.
type: bool
default: 'no'
version_added: "2.12"
ssh_opts:
description:
- Options git will pass to ssh when used as protocol, it works via GIT_SSH_OPTIONS.
- For older versions it appends GIT_SSH_OPTIONS to GIT_SSH/GIT_SSH_COMMAND.
- Other options can be added to this list, like I(key_file) and I(accept_hostkey).
- An example value could be "-o StrictHostKeyChecking=no" (although this particular
option is better set by I(accept_hostkey)).
- The module ensures that 'BatchMode=yes' is always present to avoid prompts.
type: str
version_added: "1.5"
key_file:
description:
- Specify an optional private key file path, on the target host, to use for the checkout.
- This ensures 'IdentitiesOnly=yes' is present in ssh_opts.
type: path
version_added: "1.5"
reference:
description:
- Reference repository (see "git clone --reference ...").
version_added: "1.4"
remote:
description:
- Name of the remote.
type: str
default: "origin"
refspec:
description:
- Add an additional refspec to be fetched.
If version is set to a I(SHA-1) not reachable from any branch
or tag, this option may be necessary to specify the ref containing
the I(SHA-1).
Uses the same syntax as the C(git fetch) command.
An example value could be "refs/meta/config".
type: str
version_added: "1.9"
force:
description:
- If C(yes), any modified files in the working
repository will be discarded. Prior to 0.7, this was always
C(yes) and could not be disabled. Prior to 1.9, the default was
C(yes).
type: bool
default: 'no'
version_added: "0.7"
depth:
description:
- Create a shallow clone with a history truncated to the specified
number of revisions. The minimum possible value is C(1), otherwise
ignored. Needs I(git>=1.9.1) to work correctly.
type: int
version_added: "1.2"
clone:
description:
- If C(no), do not clone the repository even if it does not exist locally.
type: bool
default: 'yes'
version_added: "1.9"
update:
description:
- If C(no), do not retrieve new revisions from the origin repository.
- Operations like archive will work on the existing (old) repository and might
not respond to changes to the options version or remote.
type: bool
default: 'yes'
version_added: "1.2"
executable:
description:
- Path to git executable to use. If not supplied,
the normal mechanism for resolving binary paths will be used.
type: path
version_added: "1.4"
bare:
description:
- If C(yes), repository will be created as a bare repo, otherwise
it will be a standard repo with a workspace.
type: bool
default: 'no'
version_added: "1.4"
umask:
description:
- The umask to set before doing any checkouts, or any other
repository maintenance.
type: raw
version_added: "2.2"
recursive:
description:
- If C(no), repository will be cloned without the --recursive
option, skipping sub-modules.
type: bool
default: 'yes'
version_added: "1.6"
single_branch:
description:
- Clone only the history leading to the tip of the specified revision.
type: bool
default: 'no'
version_added: '2.11'
track_submodules:
description:
- If C(yes), submodules will track the latest commit on their
master branch (or other branch specified in .gitmodules). If
C(no), submodules will be kept at the revision specified by the
main project. This is equivalent to specifying the --remote flag
to git submodule update.
type: bool
default: 'no'
version_added: "1.8"
verify_commit:
description:
- If C(yes), when cloning or checking out a I(version) verify the
signature of a GPG signed commit. This requires git version>=2.1.0
to be installed. The commit MUST be signed and the public key MUST
be present in the GPG keyring.
type: bool
default: 'no'
version_added: "2.0"
archive:
description:
- Specify archive file path with extension. If specified, creates an
archive file of the specified format containing the tree structure
for the source tree.
Allowed archive formats ["zip", "tar.gz", "tar", "tgz"].
- This will clone and perform git archive from local directory as not
all git servers support git archive.
type: path
version_added: "2.4"
archive_prefix:
description:
- Specify a prefix to add to each file path in archive. Requires I(archive) to be specified.
version_added: "2.10"
type: str
separate_git_dir:
description:
- The path to place the cloned repository. If specified, Git repository
can be separated from working tree.
type: path
version_added: "2.7"
gpg_whitelist:
description:
- A list of trusted GPG fingerprints to compare to the fingerprint of the
GPG-signed commit.
- Only used when I(verify_commit=yes).
- Use of this feature requires Git 2.6+ due to its reliance on git's C(--raw) flag to C(verify-commit) and C(verify-tag).
type: list
elements: str
default: []
version_added: "2.9"
requirements:
- git>=1.7.1 (the command line tool)
attributes:
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: posix
notes:
- "If the task seems to be hanging, first verify remote host is in C(known_hosts).
SSH will prompt user to authorize the first contact with a remote host. To avoid this prompt,
one solution is to use the option accept_hostkey. Another solution is to
add the remote host public key in C(/etc/ssh/ssh_known_hosts) before calling
the git module, with the following command: ssh-keyscan -H remote_host.com >> /etc/ssh/ssh_known_hosts."
'''
EXAMPLES = '''
- name: Git checkout
ansible.builtin.git:
repo: 'https://foosball.example.org/path/to/repo.git'
dest: /srv/checkout
version: release-0.22
- name: Read-write git checkout from github
ansible.builtin.git:
repo: [email protected]:mylogin/hello.git
dest: /home/mylogin/hello
- name: Just ensuring the repo checkout exists
ansible.builtin.git:
repo: 'https://foosball.example.org/path/to/repo.git'
dest: /srv/checkout
update: no
- name: Just get information about the repository whether or not it has already been cloned locally
ansible.builtin.git:
repo: 'https://foosball.example.org/path/to/repo.git'
dest: /srv/checkout
clone: no
update: no
- name: Checkout a github repo and use refspec to fetch all pull requests
ansible.builtin.git:
repo: https://github.com/ansible/ansible-examples.git
dest: /src/ansible-examples
refspec: '+refs/pull/*:refs/heads/*'
- name: Create git archive from repo
ansible.builtin.git:
repo: https://github.com/ansible/ansible-examples.git
dest: /src/ansible-examples
archive: /tmp/ansible-examples.zip
- name: Clone a repo with separate git directory
ansible.builtin.git:
repo: https://github.com/ansible/ansible-examples.git
dest: /src/ansible-examples
separate_git_dir: /src/ansible-examples.git
- name: Example clone of a single branch
ansible.builtin.git:
repo: https://github.com/ansible/ansible-examples.git
dest: /src/ansible-examples
single_branch: yes
version: master
- name: Avoid hanging when http(s) password is missing
ansible.builtin.git:
repo: https://github.com/ansible/could-be-a-private-repo
dest: /src/from-private-repo
environment:
GIT_TERMINAL_PROMPT: 0 # reports "terminal prompts disabled" on missing password
# or GIT_ASKPASS: /bin/true # for git before version 2.3.0, reports "Authentication failed" on missing password
'''
RETURN = '''
after:
description: Last commit revision of the repository retrieved during the update.
returned: success
type: str
sample: 4c020102a9cd6fe908c9a4a326a38f972f63a903
before:
description: Commit revision before the repository was updated, "null" for new repository.
returned: success
type: str
sample: 67c04ebe40a003bda0efb34eacfb93b0cafdf628
remote_url_changed:
description: Contains True or False whether or not the remote URL was changed.
returned: success
type: bool
sample: True
warnings:
description: List of warnings if requested features were not available due to a too old git version.
returned: error
type: str
sample: git version is too old to fully support the depth argument. Falling back to full checkouts.
git_dir_now:
description: Contains the new path of .git directory if it is changed.
returned: success
type: str
sample: /path/to/new/git/dir
git_dir_before:
description: Contains the original path of .git directory if it is changed.
returned: success
type: str
sample: /path/to/old/git/dir
'''
import filecmp
import os
import re
import shlex
import stat
import sys
import shutil
import tempfile
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.six import b, string_types
def relocate_repo(module, result, repo_dir, old_repo_dir, worktree_dir):
if os.path.exists(repo_dir):
module.fail_json(msg='Separate-git-dir path %s already exists.' % repo_dir)
if worktree_dir:
dot_git_file_path = os.path.join(worktree_dir, '.git')
try:
shutil.move(old_repo_dir, repo_dir)
with open(dot_git_file_path, 'w') as dot_git_file:
dot_git_file.write('gitdir: %s' % repo_dir)
result['git_dir_before'] = old_repo_dir
result['git_dir_now'] = repo_dir
except (IOError, OSError) as err:
# if we already moved the .git dir, roll it back
if os.path.exists(repo_dir):
shutil.move(repo_dir, old_repo_dir)
module.fail_json(msg=u'Unable to move git dir. %s' % to_text(err))
def head_splitter(headfile, remote, module=None, fail_on_error=False):
'''Extract the head reference'''
# https://github.com/ansible/ansible-modules-core/pull/907
res = None
if os.path.exists(headfile):
rawdata = None
try:
f = open(headfile, 'r')
rawdata = f.readline()
f.close()
except Exception:
if fail_on_error and module:
module.fail_json(msg="Unable to read %s" % headfile)
if rawdata:
try:
rawdata = rawdata.replace('refs/remotes/%s' % remote, '', 1)
refparts = rawdata.split(' ')
newref = refparts[-1]
nrefparts = newref.split('/', 2)
res = nrefparts[-1].rstrip('\n')
except Exception:
if fail_on_error and module:
module.fail_json(msg="Unable to split head from '%s'" % rawdata)
return res
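# Example: a refs/remotes/origin/HEAD line of
#   "ref: refs/remotes/origin/main"
# with remote="origin" becomes "ref: /main" after the replace; the space split
# keeps "/main" and the final "/" split returns "main".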
def unfrackgitpath(path):
if path is None:
return None
# copied from ansible.utils.path
return os.path.normpath(os.path.realpath(os.path.expanduser(os.path.expandvars(path))))
def get_submodule_update_params(module, git_path, cwd):
# or: git submodule [--quiet] update [--init] [-N|--no-fetch]
# [-f|--force] [--rebase] [--reference <repository>] [--merge]
# [--recursive] [--] [<path>...]
params = []
# run a bad submodule command to get valid params
cmd = "%s submodule update --help" % (git_path)
rc, stdout, stderr = module.run_command(cmd, cwd=cwd)
lines = stderr.split('\n')
update_line = None
for line in lines:
if 'git submodule [--quiet] update ' in line:
update_line = line
if update_line:
update_line = update_line.replace('[', '')
update_line = update_line.replace(']', '')
update_line = update_line.replace('|', ' ')
parts = shlex.split(update_line)
for part in parts:
if part.startswith('--'):
part = part.replace('--', '')
params.append(part)
return params
def write_ssh_wrapper(module):
'''
This writes a shell wrapper for ssh options to be used with git.
This is only relevant for older versions of git that cannot
handle the options themselves. Returns the path to the script.
'''
try:
# make sure we have full permission to the module_dir, which
# may not be the case if we're sudo'ing to a non-root user
if os.access(module.tmpdir, os.W_OK | os.R_OK | os.X_OK):
fd, wrapper_path = tempfile.mkstemp(prefix=module.tmpdir + '/')
else:
raise OSError
except (IOError, OSError):
fd, wrapper_path = tempfile.mkstemp()
# use existing git_ssh/ssh_command, fallback to 'ssh'
template = b("""#!/bin/sh
%s $GIT_SSH_OPTS
""" % os.environ.get('GIT_SSH', os.environ.get('GIT_SSH_COMMAND', 'ssh')))
# write it
with os.fdopen(fd, 'w+b') as fh:
fh.write(template)
# set execute
st = os.stat(wrapper_path)
os.chmod(wrapper_path, st.st_mode | stat.S_IEXEC)
module.debug('Wrote temp git ssh wrapper (%s): %s' % (wrapper_path, template))
# ensure we cleanup after ourselves
module.add_cleanup_file(path=wrapper_path)
return wrapper_path
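# With GIT_SSH/GIT_SSH_COMMAND unset, the generated wrapper is simply:
#   #!/bin/sh
#   ssh $GIT_SSH_OPTS
# so whatever has been accumulated in GIT_SSH_OPTS is expanded onto the ssh
# command line when git invokes the wrapper.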
def set_git_ssh_env(key_file, ssh_opts, git_version, module):
'''
Use environment variables to configure git's ssh execution,
which varies by version, but this function should handle all.
'''
# initialise to existing ssh opts and/or append user provided
if ssh_opts is None:
ssh_opts = os.environ.get('GIT_SSH_OPTS', '')
else:
ssh_opts = os.environ.get('GIT_SSH_OPTS', '') + ' ' + ssh_opts
# hostkey acceptance
accept_key = "StrictHostKeyChecking=no"
if module.params['accept_hostkey'] and accept_key not in ssh_opts:
ssh_opts += " -o %s" % accept_key
# avoid prompts
force_batch = 'BatchMode=yes'
if force_batch not in ssh_opts:
ssh_opts += ' -o %s' % (force_batch)
# deal with key file
if key_file:
os.environ["GIT_KEY"] = key_file
ikey = 'IdentitiesOnly=yes'
if ikey not in ssh_opts:
ssh_opts += ' -o %s' % ikey
# we should always have finalized string here.
os.environ["GIT_SSH_OPTS"] = ssh_opts
# older than 2.3 does not know how to use git_ssh_opts,
# so we force it into ssh command var
if git_version < LooseVersion('2.3.0'):
# these versions don't support GIT_SSH_OPTS so have to write wrapper
wrapper = write_ssh_wrapper(module)
# force use of git_ssh_opts via wrapper
os.environ["GIT_SSH"] = wrapper
# same as above but older git uses git_ssh_command
os.environ["GIT_SSH_COMMAND"] = wrapper
def get_version(module, git_path, dest, ref="HEAD"):
''' samples the version of the git repo '''
cmd = "%s rev-parse %s" % (git_path, ref)
rc, stdout, stderr = module.run_command(cmd, cwd=dest)
sha = to_native(stdout).rstrip('\n')
return sha
def ssh_supports_acceptnewhostkey(module):
try:
ssh_path = get_bin_path('ssh')
except ValueError as err:
module.fail_json(
msg='Remote host is missing ssh command, so you cannot '
'use acceptnewhostkey option.', details=to_text(err))
supports_acceptnewhostkey = True
cmd = [ssh_path, '-o', 'StrictHostKeyChecking=accept-new', '-V']
rc, stdout, stderr = module.run_command(cmd)
if rc != 0:
supports_acceptnewhostkey = False
return supports_acceptnewhostkey
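# The probe relies on ssh validating -o arguments even when only printing its
# version: a client that predates accept-new exits non-zero on the unknown
# StrictHostKeyChecking value, so rc != 0 means the option is unsupported.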
def get_submodule_versions(git_path, module, dest, version='HEAD'):
cmd = [git_path, 'submodule', 'foreach', git_path, 'rev-parse', version]
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(
msg='Unable to determine hashes of submodules',
stdout=out,
stderr=err,
rc=rc)
submodules = {}
subm_name = None
for line in out.splitlines():
if line.startswith("Entering '"):
subm_name = line[10:-1]
elif len(line.strip()) == 40:
if subm_name is None:
module.fail_json()
submodules[subm_name] = line.strip()
subm_name = None
else:
module.fail_json(msg='Unable to parse submodule hash line: %s' % line.strip())
if subm_name is not None:
module.fail_json(msg='Unable to find hash for submodule: %s' % subm_name)
return submodules
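# The "submodule foreach" output alternates directory and sha lines, e.g.
#   Entering 'libfoo'
#   4c020102a9cd6fe908c9a4a326a38f972f63a903
# which the loop above folds into {'libfoo': '4c0201...'}.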
def clone(git_path, module, repo, dest, remote, depth, version, bare,
reference, refspec, git_version_used, verify_commit, separate_git_dir, result, gpg_whitelist, single_branch):
''' makes a new git repo if it does not already exist '''
dest_dirname = os.path.dirname(dest)
try:
os.makedirs(dest_dirname)
except Exception:
pass
cmd = [git_path, 'clone']
if bare:
cmd.append('--bare')
else:
cmd.extend(['--origin', remote])
is_branch_or_tag = is_remote_branch(git_path, module, dest, repo, version) or is_remote_tag(git_path, module, dest, repo, version)
if depth:
if version == 'HEAD' or refspec:
cmd.extend(['--depth', str(depth)])
elif is_branch_or_tag:
cmd.extend(['--depth', str(depth)])
cmd.extend(['--branch', version])
else:
# only use depth if the remote object is branch or tag (i.e. fetchable)
module.warn("Ignoring depth argument. "
"Shallow clones are only available for "
"HEAD, branches, tags or in combination with refspec.")
if reference:
cmd.extend(['--reference', str(reference)])
if single_branch:
if git_version_used is None:
module.fail_json(msg='Cannot find git executable at %s' % git_path)
if git_version_used < LooseVersion('1.7.10'):
module.warn("git version '%s' is too old to use 'single-branch'. Ignoring." % git_version_used)
else:
cmd.append("--single-branch")
if is_branch_or_tag:
cmd.extend(['--branch', version])
needs_separate_git_dir_fallback = False
if separate_git_dir:
if git_version_used is None:
module.fail_json(msg='Cannot find git executable at %s' % git_path)
if git_version_used < LooseVersion('1.7.5'):
# git before 1.7.5 doesn't have separate-git-dir argument, do fallback
needs_separate_git_dir_fallback = True
else:
cmd.append('--separate-git-dir=%s' % separate_git_dir)
cmd.extend([repo, dest])
module.run_command(cmd, check_rc=True, cwd=dest_dirname)
if needs_separate_git_dir_fallback:
relocate_repo(module, result, separate_git_dir, os.path.join(dest, ".git"), dest)
if bare and remote != 'origin':
module.run_command([git_path, 'remote', 'add', remote, repo], check_rc=True, cwd=dest)
if refspec:
cmd = [git_path, 'fetch']
if depth:
cmd.extend(['--depth', str(depth)])
cmd.extend([remote, refspec])
module.run_command(cmd, check_rc=True, cwd=dest)
if verify_commit:
verify_commit_sign(git_path, module, dest, version, gpg_whitelist)
def has_local_mods(module, git_path, dest, bare):
if bare:
return False
cmd = "%s status --porcelain" % (git_path)
rc, stdout, stderr = module.run_command(cmd, cwd=dest)
lines = stdout.splitlines()
lines = list(filter(lambda c: not re.search('^\\?\\?.*$', c), lines))
return len(lines) > 0
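# "git status --porcelain" marks untracked files with a leading "??"; the
# filter above drops those lines, so only changes to tracked files count as
# local modifications.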
def reset(git_path, module, dest):
'''
Resets the index and working tree to HEAD.
Discards any changes to tracked files in working
tree since that commit.
'''
cmd = "%s reset --hard HEAD" % (git_path,)
return module.run_command(cmd, check_rc=True, cwd=dest)
def get_diff(module, git_path, dest, repo, remote, depth, bare, before, after):
''' Return the difference between 2 versions '''
if before is None:
return {'prepared': '>> Newly checked out %s' % after}
elif before != after:
# Ensure we have the object we are referring to during git diff !
git_version_used = git_version(git_path, module)
fetch(git_path, module, repo, dest, after, remote, depth, bare, '', git_version_used)
cmd = '%s diff %s %s' % (git_path, before, after)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc == 0 and out:
return {'prepared': out}
elif rc == 0:
return {'prepared': '>> No visual differences between %s and %s' % (before, after)}
elif err:
return {'prepared': '>> Failed to get proper diff between %s and %s:\n>> %s' % (before, after, err)}
else:
return {'prepared': '>> Failed to get proper diff between %s and %s' % (before, after)}
return {}
def get_remote_head(git_path, module, dest, version, remote, bare):
cloning = False
cwd = None
tag = False
if remote == module.params['repo']:
cloning = True
elif remote == 'file://' + os.path.expanduser(module.params['repo']):
cloning = True
else:
cwd = dest
if version == 'HEAD':
if cloning:
# cloning the repo, just get the remote's HEAD version
cmd = '%s ls-remote %s -h HEAD' % (git_path, remote)
else:
head_branch = get_head_branch(git_path, module, dest, remote, bare)
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, head_branch)
elif is_remote_branch(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)
elif is_remote_tag(git_path, module, dest, remote, version):
tag = True
cmd = '%s ls-remote %s -t refs/tags/%s*' % (git_path, remote, version)
else:
# appears to be a sha1. return as-is since we cannot
# check for a specific sha1 on the remote
return version
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=cwd)
if len(out) < 1:
module.fail_json(msg="Could not determine remote revision for %s" % version, stdout=out, stderr=err, rc=rc)
out = to_native(out)
if tag:
# Find the dereferenced tag if this is an annotated tag.
for tag in out.split('\n'):
if tag.endswith(version + '^{}'):
out = tag
break
elif tag.endswith(version):
out = tag
rev = out.split()[0]
return rev
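# Example: "git ls-remote <remote> -h refs/heads/main" prints a line like
#   4c020102a9cd6fe908c9a4a326a38f972f63a903  refs/heads/main
# and the leading sha is returned. For annotated tags the dereferenced
# "refs/tags/<tag>^{}" line is preferred over the tag object itself.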
def is_remote_tag(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version)
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if to_native(version, errors='surrogate_or_strict') in out:
return True
else:
return False
def get_branches(git_path, module, dest):
branches = []
cmd = '%s branch --no-color -a' % (git_path,)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Could not determine branch data - received %s" % out, stdout=out, stderr=err)
for line in out.split('\n'):
if line.strip():
branches.append(line.strip())
return branches
def get_annotated_tags(git_path, module, dest):
tags = []
cmd = [git_path, 'for-each-ref', 'refs/tags/', '--format', '%(objecttype):%(refname:short)']
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Could not determine tag data - received %s" % out, stdout=out, stderr=err)
for line in to_native(out).split('\n'):
if line.strip():
tagtype, tagname = line.strip().split(':')
if tagtype == 'tag':
tags.append(tagname)
return tags
def is_remote_branch(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if to_native(version, errors='surrogate_or_strict') in out:
return True
else:
return False
def is_local_branch(git_path, module, dest, branch):
branches = get_branches(git_path, module, dest)
lbranch = '%s' % branch
if lbranch in branches:
return True
elif '* %s' % branch in branches:
return True
else:
return False
def is_not_a_branch(git_path, module, dest):
branches = get_branches(git_path, module, dest)
for branch in branches:
if branch.startswith('* ') and ('no branch' in branch or 'detached from' in branch or 'detached at' in branch):
return True
return False
def get_repo_path(dest, bare):
if bare:
repo_path = dest
else:
repo_path = os.path.join(dest, '.git')
# Check if the .git is a file. If it is a file, it means that the repository is in an external directory relative to the working copy (e.g. we are in a
# submodule structure).
if os.path.isfile(repo_path):
with open(repo_path, 'r') as gitfile:
data = gitfile.read()
ref_prefix, gitdir = data.rstrip().split('gitdir: ', 1)
if ref_prefix:
raise ValueError('.git file has invalid git dir reference format')
# There is a possibility the .git file to have an absolute path.
if os.path.isabs(gitdir):
repo_path = gitdir
else:
# Use original destination directory with data from .git file.
repo_path = os.path.join(dest, gitdir)
if not os.path.isdir(repo_path):
raise ValueError('%s is not a directory' % repo_path)
return repo_path
def get_head_branch(git_path, module, dest, remote, bare=False):
'''
Determine what branch HEAD is associated with. This is partly
taken from lib/ansible/utils/__init__.py. It finds the correct
path to .git/HEAD and reads from that file the branch that HEAD is
associated with. In the case of a detached HEAD, this will look
up the branch in .git/refs/remotes/<remote>/HEAD.
'''
try:
repo_path = get_repo_path(dest, bare)
except (IOError, ValueError) as err:
# No repo path found
"""``.git`` file does not have a valid format for detached Git dir."""
module.fail_json(
msg='Current repo does not have a valid reference to a '
'separate Git dir or it refers to the invalid path',
details=to_text(err),
)
# Read .git/HEAD for the name of the branch.
# If we're in a detached HEAD state, look up the branch associated with
# the remote HEAD in .git/refs/remotes/<remote>/HEAD
headfile = os.path.join(repo_path, "HEAD")
if is_not_a_branch(git_path, module, dest):
headfile = os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD')
branch = head_splitter(headfile, remote, module=module, fail_on_error=True)
return branch
def get_remote_url(git_path, module, dest, remote):
'''Return URL of remote source for repo.'''
command = [git_path, 'ls-remote', '--get-url', remote]
(rc, out, err) = module.run_command(command, cwd=dest)
if rc != 0:
# There was an issue getting remote URL, most likely
# command is not available in this version of Git.
return None
return to_native(out).rstrip('\n')
def set_remote_url(git_path, module, repo, dest, remote):
''' updates repo from remote sources '''
# Return if remote URL isn't changing.
remote_url = get_remote_url(git_path, module, dest, remote)
if remote_url == repo or unfrackgitpath(remote_url) == unfrackgitpath(repo):
return False
command = [git_path, 'remote', 'set-url', remote, repo]
(rc, out, err) = module.run_command(command, cwd=dest)
if rc != 0:
label = "set a new url %s for %s" % (repo, remote)
module.fail_json(msg="Failed to %s: %s %s" % (label, out, err))
# Return False if remote_url is None to maintain previous behavior
# for Git versions prior to 1.7.5 that lack required functionality.
return remote_url is not None
def fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec, git_version_used, force=False):
''' updates repo from remote sources '''
set_remote_url(git_path, module, repo, dest, remote)
commands = []
fetch_str = 'download remote objects and refs'
fetch_cmd = [git_path, 'fetch']
refspecs = []
if depth:
# try to find the minimal set of refs we need to fetch to get a
# successful checkout
currenthead = get_head_branch(git_path, module, dest, remote)
if refspec:
refspecs.append(refspec)
elif version == 'HEAD':
refspecs.append(currenthead)
elif is_remote_branch(git_path, module, dest, repo, version):
if currenthead != version:
# this workaround is only needed for older git versions
# 1.8.3 is broken, 1.9.x works
# ensure that remote branch is available as both local and remote ref
refspecs.append('+refs/heads/%s:refs/heads/%s' % (version, version))
refspecs.append('+refs/heads/%s:refs/remotes/%s/%s' % (version, remote, version))
elif is_remote_tag(git_path, module, dest, repo, version):
refspecs.append('+refs/tags/' + version + ':refs/tags/' + version)
if refspecs:
# if refspecs is empty, i.e. version is neither heads nor tags
# assume it is a version hash
# fall back to a full clone, otherwise we might not be able to checkout
# version
fetch_cmd.extend(['--depth', str(depth)])
if not depth or not refspecs:
# don't try to be minimalistic but do a full clone
# also do this if depth is given, but version is something that can't be fetched directly
if bare:
refspecs = ['+refs/heads/*:refs/heads/*', '+refs/tags/*:refs/tags/*']
else:
# ensure all tags are fetched
if git_version_used >= LooseVersion('1.9'):
fetch_cmd.append('--tags')
else:
# old git versions have a bug in --tags that prevents updating existing tags
commands.append((fetch_str, fetch_cmd + [remote]))
refspecs = ['+refs/tags/*:refs/tags/*']
if refspec:
refspecs.append(refspec)
if force:
fetch_cmd.append('--force')
fetch_cmd.extend([remote])
commands.append((fetch_str, fetch_cmd + refspecs))
for (label, command) in commands:
(rc, out, err) = module.run_command(command, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to %s: %s %s" % (label, out, err), cmd=command)
def submodules_fetch(git_path, module, remote, track_submodules, dest):
changed = False
if not os.path.exists(os.path.join(dest, '.gitmodules')):
# no submodules
return changed
gitmodules_file = open(os.path.join(dest, '.gitmodules'), 'r')
for line in gitmodules_file:
# Check for new submodules
if not changed and line.strip().startswith('path'):
path = line.split('=', 1)[1].strip()
# Check that dest/path/.git exists
if not os.path.exists(os.path.join(dest, path, '.git')):
changed = True
# Check for updates to existing modules
if not changed:
# Fetch updates
begin = get_submodule_versions(git_path, module, dest)
cmd = [git_path, 'submodule', 'foreach', git_path, 'fetch']
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to fetch submodules: %s" % out + err)
if track_submodules:
# Compare against submodule HEAD
# FIXME: determine this from .gitmodules
version = 'master'
after = get_submodule_versions(git_path, module, dest, '%s/%s' % (remote, version))
if begin != after:
changed = True
else:
# Compare against the superproject's expectation
cmd = [git_path, 'submodule', 'status']
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if rc != 0:
module.fail_json(msg='Failed to retrieve submodule status: %s' % out + err)
for line in out.splitlines():
if line[0] != ' ':
changed = True
break
return changed
def submodule_update(git_path, module, dest, track_submodules, force=False):
''' init and update any submodules '''
# get the valid submodule params
params = get_submodule_update_params(module, git_path, dest)
# skip submodule commands if .gitmodules is not present
if not os.path.exists(os.path.join(dest, '.gitmodules')):
return (0, '', '')
cmd = [git_path, 'submodule', 'sync']
(rc, out, err) = module.run_command(cmd, check_rc=True, cwd=dest)
if 'remote' in params and track_submodules:
cmd = [git_path, 'submodule', 'update', '--init', '--recursive', '--remote']
else:
cmd = [git_path, 'submodule', 'update', '--init', '--recursive']
if force:
cmd.append('--force')
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to init/update submodules: %s" % out + err)
return (rc, out, err)
def set_remote_branch(git_path, module, dest, remote, version, depth):
"""set refs for the remote branch version
This assumes the branch does not yet exist locally and is therefore also not checked out.
Can't use git remote set-branches, as it is not available in git 1.7.1 (centos6)
"""
branchref = "+refs/heads/%s:refs/heads/%s" % (version, version)
branchref += ' +refs/heads/%s:refs/remotes/%s/%s' % (version, remote, version)
cmd = "%s fetch --depth=%s %s %s" % (git_path, depth, remote, branchref)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to fetch branch from remote: %s" % version, stdout=out, stderr=err, rc=rc)
def switch_version(git_path, module, dest, remote, version, verify_commit, depth, gpg_whitelist):
cmd = ''
if version == 'HEAD':
branch = get_head_branch(git_path, module, dest, remote)
(rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, branch), cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to checkout branch %s" % branch,
stdout=out, stderr=err, rc=rc)
cmd = "%s reset --hard %s/%s --" % (git_path, remote, branch)
else:
# FIXME check for local_branch first, should have been fetched already
if is_remote_branch(git_path, module, dest, remote, version):
if depth and not is_local_branch(git_path, module, dest, version):
# git clone --depth implies --single-branch, which makes
# the checkout fail if the version changes
# fetch the remote branch, to be able to check it out next
set_remote_branch(git_path, module, dest, remote, version, depth)
if not is_local_branch(git_path, module, dest, version):
cmd = "%s checkout --track -b %s %s/%s" % (git_path, version, remote, version)
else:
(rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, version), cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to checkout branch %s" % version, stdout=out, stderr=err, rc=rc)
cmd = "%s reset --hard %s/%s" % (git_path, remote, version)
else:
cmd = "%s checkout --force %s" % (git_path, version)
(rc, out1, err1) = module.run_command(cmd, cwd=dest)
if rc != 0:
if version != 'HEAD':
module.fail_json(msg="Failed to checkout %s" % (version),
stdout=out1, stderr=err1, rc=rc, cmd=cmd)
else:
module.fail_json(msg="Failed to checkout branch %s" % (branch),
stdout=out1, stderr=err1, rc=rc, cmd=cmd)
if verify_commit:
verify_commit_sign(git_path, module, dest, version, gpg_whitelist)
return (rc, out1, err1)
def verify_commit_sign(git_path, module, dest, version, gpg_whitelist):
if version in get_annotated_tags(git_path, module, dest):
git_sub = "verify-tag"
else:
git_sub = "verify-commit"
cmd = "%s %s %s" % (git_path, git_sub, version)
if gpg_whitelist:
cmd += " --raw"
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg='Failed to verify GPG signature of commit/tag "%s"' % version, stdout=out, stderr=err, rc=rc)
if gpg_whitelist:
fingerprint = get_gpg_fingerprint(err)
if fingerprint not in gpg_whitelist:
module.fail_json(msg='The gpg_whitelist does not include the public key "%s" for this commit' % fingerprint, stdout=out, stderr=err, rc=rc)
return (rc, out, err)
def get_gpg_fingerprint(output):
"""Return a fingerprint of the primary key.
Ref:
https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob;f=doc/DETAILS;hb=HEAD#l482
"""
for line in output.splitlines():
data = line.split()
if data[1] != 'VALIDSIG':
continue
# if signed with a subkey, this contains the primary key fingerprint
data_id = 11 if len(data) == 11 else 2
return data[data_id]
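# Example VALIDSIG status line (format per the GnuPG DETAILS doc cited above):
#   [GNUPG:] VALIDSIG <sig-fingerprint> <date> <timestamp> ... <primary-fingerprint>
# data[2] is the fingerprint of the signing (sub)key, while the trailing field
# carries the primary key fingerprint when the commit was signed with a subkey.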
def git_version(git_path, module):
"""return the installed version of git"""
cmd = "%s --version" % git_path
(rc, out, err) = module.run_command(cmd)
if rc != 0:
# one could fail_json here, but the version info is not that important,
# so let's try to fail only on actual git commands
return None
rematch = re.search('git version (.*)$', to_native(out))
if not rematch:
return None
return LooseVersion(rematch.groups()[0])
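# Example: "git version 2.34.1" yields LooseVersion("2.34.1"); a failing
# `git --version` or unparseable output yields None.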
def git_archive(git_path, module, dest, archive, archive_fmt, archive_prefix, version):
""" Create git archive in given source directory """
cmd = [git_path, 'archive', '--format', archive_fmt, '--output', archive, version]
if archive_prefix is not None:
cmd.insert(-1, '--prefix')
cmd.insert(-1, archive_prefix)
(rc, out, err) = module.run_command(cmd, cwd=dest)
if rc != 0:
module.fail_json(msg="Failed to perform archive operation",
details="Git archive command failed to create "
"archive %s using %s directory."
"Error: %s" % (archive, dest, err))
return rc, out, err
def create_archive(git_path, module, dest, archive, archive_prefix, version, repo, result):
""" Helper function for creating archive using git_archive """
all_archive_fmt = {'.zip': 'zip', '.gz': 'tar.gz', '.tar': 'tar',
'.tgz': 'tgz'}
_, archive_ext = os.path.splitext(archive)
archive_fmt = all_archive_fmt.get(archive_ext, None)
if archive_fmt is None:
module.fail_json(msg="Unable to get file extension from "
"archive file name : %s" % archive,
details="Please specify archive as filename with "
"extension. File extension can be one "
"of ['tar', 'tar.gz', 'zip', 'tgz']")
repo_name = repo.split("/")[-1].replace(".git", "")
if os.path.exists(archive):
# If git archive file exists, then compare it with new git archive file.
# if match, do nothing
# if does not match, then replace existing with temp archive file.
tempdir = tempfile.mkdtemp()
new_archive_dest = os.path.join(tempdir, repo_name)
new_archive = new_archive_dest + '.' + archive_fmt
git_archive(git_path, module, dest, new_archive, archive_fmt, archive_prefix, version)
# filecmp is supposed to be more efficient than an md5sum checksum
if filecmp.cmp(new_archive, archive):
result.update(changed=False)
# Cleanup before exiting
try:
shutil.rmtree(tempdir)
except OSError:
pass
else:
try:
shutil.move(new_archive, archive)
shutil.rmtree(tempdir)
result.update(changed=True)
except OSError as e:
module.fail_json(msg="Failed to move %s to %s" %
(new_archive, archive),
details=u"Error occurred while moving : %s"
% to_text(e))
else:
# Perform archive from local directory
git_archive(git_path, module, dest, archive, archive_fmt, archive_prefix, version)
result.update(changed=True)
# ===========================================
def main():
module = AnsibleModule(
argument_spec=dict(
dest=dict(type='path'),
repo=dict(required=True, aliases=['name']),
version=dict(default='HEAD'),
remote=dict(default='origin'),
refspec=dict(default=None),
reference=dict(default=None),
force=dict(default='no', type='bool'),
depth=dict(default=None, type='int'),
clone=dict(default='yes', type='bool'),
update=dict(default='yes', type='bool'),
verify_commit=dict(default='no', type='bool'),
gpg_whitelist=dict(default=[], type='list', elements='str'),
accept_hostkey=dict(default='no', type='bool'),
accept_newhostkey=dict(default='no', type='bool'),
key_file=dict(default=None, type='path', required=False),
ssh_opts=dict(default=None, required=False),
executable=dict(default=None, type='path'),
bare=dict(default='no', type='bool'),
recursive=dict(default='yes', type='bool'),
single_branch=dict(default=False, type='bool'),
track_submodules=dict(default='no', type='bool'),
umask=dict(default=None, type='raw'),
archive=dict(type='path'),
archive_prefix=dict(),
separate_git_dir=dict(type='path'),
),
mutually_exclusive=[('separate_git_dir', 'bare'), ('accept_hostkey', 'accept_newhostkey')],
required_by={'archive_prefix': ['archive']},
supports_check_mode=True
)
dest = module.params['dest']
repo = module.params['repo']
version = module.params['version']
remote = module.params['remote']
refspec = module.params['refspec']
force = module.params['force']
depth = module.params['depth']
update = module.params['update']
allow_clone = module.params['clone']
bare = module.params['bare']
verify_commit = module.params['verify_commit']
gpg_whitelist = module.params['gpg_whitelist']
reference = module.params['reference']
single_branch = module.params['single_branch']
git_path = module.params['executable'] or module.get_bin_path('git', True)
key_file = module.params['key_file']
ssh_opts = module.params['ssh_opts']
umask = module.params['umask']
archive = module.params['archive']
archive_prefix = module.params['archive_prefix']
separate_git_dir = module.params['separate_git_dir']
result = dict(changed=False, warnings=list())
if module.params['accept_hostkey']:
if ssh_opts is not None:
if ("-o StrictHostKeyChecking=no" not in ssh_opts) and ("-o StrictHostKeyChecking=accept-new" not in ssh_opts):
ssh_opts += " -o StrictHostKeyChecking=no"
else:
ssh_opts = "-o StrictHostKeyChecking=no"
if module.params['accept_newhostkey']:
if not ssh_supports_acceptnewhostkey(module):
module.warn("Your ssh client does not support accept_newhostkey option, therefore it cannot be used.")
else:
if ssh_opts is not None:
if ("-o StrictHostKeyChecking=no" not in ssh_opts) and ("-o StrictHostKeyChecking=accept-new" not in ssh_opts):
ssh_opts += " -o StrictHostKeyChecking=accept-new"
else:
ssh_opts = "-o StrictHostKeyChecking=accept-new"
# evaluate and set the umask before doing anything else
if umask is not None:
if not isinstance(umask, string_types):
module.fail_json(msg="umask must be defined as a quoted octal integer")
try:
umask = int(umask, 8)
except Exception:
module.fail_json(msg="umask must be an octal integer",
details=to_text(sys.exc_info()[1]))
os.umask(umask)
# Certain features such as depth require a file:/// protocol for path based urls
# so force a protocol here ...
if os.path.expanduser(repo).startswith('/'):
repo = 'file://' + os.path.expanduser(repo)
# We screenscrape a huge amount of git commands so use C locale anytime we
# call run_command()
locale = get_best_parsable_locale(module)
module.run_command_environ_update = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LC_CTYPE=locale)
if separate_git_dir:
separate_git_dir = os.path.realpath(separate_git_dir)
gitconfig = None
if not dest and allow_clone:
module.fail_json(msg="the destination directory must be specified unless clone=no")
elif dest:
dest = os.path.abspath(dest)
try:
repo_path = get_repo_path(dest, bare)
if separate_git_dir and os.path.exists(repo_path) and separate_git_dir != repo_path:
result['changed'] = True
if not module.check_mode:
relocate_repo(module, result, separate_git_dir, repo_path, dest)
repo_path = separate_git_dir
except (IOError, ValueError) as err:
# No repo path found
"""``.git`` file does not have a valid format for detached Git dir."""
module.fail_json(
msg='Current repo does not have a valid reference to a '
'separate Git dir or it refers to the invalid path',
details=to_text(err),
)
gitconfig = os.path.join(repo_path, 'config')
# iface changes so need it to make decisions
git_version_used = git_version(git_path, module)
# GIT_SSH=<path> as an environment variable, might create sh wrapper script for older versions.
set_git_ssh_env(key_file, ssh_opts, git_version_used, module)
if depth is not None and git_version_used < LooseVersion('1.9.1'):
module.warn("git version is too old to fully support the depth argument. Falling back to full checkouts.")
depth = None
recursive = module.params['recursive']
track_submodules = module.params['track_submodules']
result.update(before=None)
local_mods = False
if (dest and not os.path.exists(gitconfig)) or (not dest and not allow_clone):
# if there is no git configuration, do a clone operation unless:
# * the user requested no clone (they just want info)
# * we're doing a check mode test
# In those cases we do an ls-remote
if module.check_mode or not allow_clone:
remote_head = get_remote_head(git_path, module, dest, version, repo, bare)
result.update(changed=True, after=remote_head)
if module._diff:
diff = get_diff(module, git_path, dest, repo, remote, depth, bare, result['before'], result['after'])
if diff:
result['diff'] = diff
module.exit_json(**result)
# there's no git config, so clone
clone(git_path, module, repo, dest, remote, depth, version, bare, reference,
refspec, git_version_used, verify_commit, separate_git_dir, result, gpg_whitelist, single_branch)
elif not update:
# Just return having found a repo already in the dest path
# this does no checking that the repo is the actual repo
# requested.
result['before'] = get_version(module, git_path, dest)
result.update(after=result['before'])
if archive:
# Git archive is not supported by all git servers, so
# we will first clone and perform git archive from local directory
if module.check_mode:
result.update(changed=True)
module.exit_json(**result)
create_archive(git_path, module, dest, archive, archive_prefix, version, repo, result)
module.exit_json(**result)
else:
# else do a pull
local_mods = has_local_mods(module, git_path, dest, bare)
result['before'] = get_version(module, git_path, dest)
if local_mods:
# failure should happen regardless of check mode
if not force:
module.fail_json(msg="Local modifications exist in the destination: " + dest + " (force=no).", **result)
# if force and in non-check mode, do a reset
if not module.check_mode:
reset(git_path, module, dest)
result.update(changed=True, msg='Local modifications exist in the destination: ' + dest)
# exit if already at desired sha version
if module.check_mode:
remote_url = get_remote_url(git_path, module, dest, remote)
remote_url_changed = remote_url and remote_url != repo and unfrackgitpath(remote_url) != unfrackgitpath(repo)
else:
remote_url_changed = set_remote_url(git_path, module, repo, dest, remote)
result.update(remote_url_changed=remote_url_changed)
if module.check_mode:
remote_head = get_remote_head(git_path, module, dest, version, remote, bare)
result.update(changed=(result['before'] != remote_head or remote_url_changed), after=remote_head)
# FIXME: This diff should fail since the new remote_head is not fetched yet?!
if module._diff:
diff = get_diff(module, git_path, dest, repo, remote, depth, bare, result['before'], result['after'])
if diff:
result['diff'] = diff
module.exit_json(**result)
else:
fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec, git_version_used, force=force)
result['after'] = get_version(module, git_path, dest)
# switch to version specified regardless of whether
# we got new revisions from the repository
if not bare:
switch_version(git_path, module, dest, remote, version, verify_commit, depth, gpg_whitelist)
# Deal with submodules
submodules_updated = False
if recursive and not bare:
submodules_updated = submodules_fetch(git_path, module, remote, track_submodules, dest)
if submodules_updated:
result.update(submodules_changed=submodules_updated)
if module.check_mode:
result.update(changed=True, after=remote_head)
module.exit_json(**result)
# Switch to version specified
submodule_update(git_path, module, dest, track_submodules, force=force)
# determine if we changed anything
result['after'] = get_version(module, git_path, dest)
if result['before'] != result['after'] or local_mods or submodules_updated or remote_url_changed:
result.update(changed=True)
if module._diff:
diff = get_diff(module, git_path, dest, repo, remote, depth, bare, result['before'], result['after'])
if diff:
result['diff'] = diff
if archive:
# Git archive is not supported by all git servers, so
# we will first clone and perform git archive from local directory
if module.check_mode:
result.update(changed=True)
module.exit_json(**result)
create_archive(git_path, module, dest, archive, archive_prefix, version, repo, result)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,618 |
Missing `f` prefix on f-strings
|
Some strings look like they're meant to be f-strings but are missing the `f` prefix, meaning variable interpolation won't happen.
https://github.com/ansible/ansible/blob/d06d99cbb6a61db043436092663de8b20f6d2017/lib/ansible/cli/galaxy.py#L154
I found this issue automatically. I'm a bot. Beep Boop 🦊. See other issues I found in your repo [here](https://codereview.doctor/ansible/ansible)
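A minimal illustration (my sketch) of the pattern on that line:
```python
value = "two"

# Without the prefix the braces are kept literally:
msg = "{value} is not a valid signature count value"   # -> "{value} is not ..."

# With it, the variable is interpolated:
msg = f"{value} is not a valid signature count value"  # -> "two is not ..."
```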
|
https://github.com/ansible/ansible/issues/77618
|
https://github.com/ansible/ansible/pull/77619
|
841bdb74eb3181f50b45ad62796ca7c4586f6790
|
578a815271a9a44a1f7d69888d6afef49e658d96
| 2022-04-23T22:14:53Z |
python
| 2022-04-25T13:37:23Z |
lib/ansible/cli/galaxy.py
|
#!/usr/bin/env python
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import json
import os.path
import re
import shutil
import sys
import textwrap
import time
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import (
build_collection,
download_collections,
find_existing_collections,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
verify_collections,
SIGNATURE_COUNT_RE,
)
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.collection.gpg import GPG_ERROR_MAP
from ansible.galaxy.dependency_resolution.dataclasses import Requirement
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.yaml import yaml_dump, yaml_load
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
SERVER_DEF = [
('url', True, 'str'),
('username', False, 'str'),
('password', False, 'str'),
('token', False, 'str'),
('auth_url', False, 'str'),
('v3', False, 'bool'),
('validate_certs', False, 'bool'),
('client_id', False, 'str'),
]
def with_collection_artifacts_manager(wrapped_method):
"""Inject an artifacts manager if not passed explicitly.
This decorator constructs a ConcreteArtifactsManager and maintains
the related temporary directory auto-cleanup around the target
method invocation.
"""
def method_wrapper(*args, **kwargs):
if 'artifacts_manager' in kwargs:
return wrapped_method(*args, **kwargs)
artifacts_manager_kwargs = {'validate_certs': not context.CLIARGS['ignore_certs']}
keyring = context.CLIARGS.get('keyring', None)
if keyring is not None:
artifacts_manager_kwargs.update({
'keyring': GalaxyCLI._resolve_path(keyring),
'required_signature_count': context.CLIARGS.get('required_valid_signature_count', None),
'ignore_signature_errors': context.CLIARGS.get('ignore_gpg_errors', None),
})
with ConcreteArtifactsManager.under_tmpdir(
C.DEFAULT_LOCAL_TMP,
**artifacts_manager_kwargs
) as concrete_artifact_cm:
kwargs['artifacts_manager'] = concrete_artifact_cm
return wrapped_method(*args, **kwargs)
return method_wrapper
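# Usage sketch: a decorated execute_* method may be called without an
# artifacts_manager kwarg; a ConcreteArtifactsManager scoped to a temporary
# directory under C.DEFAULT_LOCAL_TMP is then built, injected, and cleaned up
# around the call.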
def _display_header(path, h1, h2, w1=10, w2=7):
display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format(
path,
h1,
h2,
'-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header
'-' * max([len(h2), w2]),
cwidth=w1,
vwidth=w2,
))
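# Example: _display_header('/root/.ansible/collections', 'Collection', 'Version')
# prints:
#   # /root/.ansible/collections
#   Collection Version
#   ---------- -------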
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7):
display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format(
fqcn=to_text(collection.fqcn),
version=collection.ver,
cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header
vwidth=max(vwidth, min_vwidth)
))
def _get_collection_widths(collections):
if not is_iterable(collections):
collections = (collections, )
fqcn_set = {to_text(c.fqcn) for c in collections}
version_set = {to_text(c.ver) for c in collections}
fqcn_length = len(max(fqcn_set, key=len))
version_length = len(max(version_set, key=len))
return fqcn_length, version_length
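# Example: FQCNs {"ns.coll", "namespace.collection2"} and versions
# {"1.0.0", "10.0.0-beta.1"} give (21, 13), the length of the longest in each set.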
def validate_signature_count(value):
match = re.match(SIGNATURE_COUNT_RE, value)
if match is None:
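        # missing 'f' prefix, so "{value}" is emitted literally (the bug tracked in issue #77618)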
raise ValueError("{value} is not a valid signature count value")
return value
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
name = 'ansible-galaxy'
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
self._raw_args = args
self._implicit_role = False
if len(args) > 1:
# Inject role into sys.argv[1] as a backwards compatibility step
if args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
args.insert(1, 'role')
self._implicit_role = True
# since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization
if args[1:3] == ['role', 'login']:
display.error(
"The login command was removed in late 2020. An API key is now required to publish roles or collections "
"to Galaxy. The key can be found at https://galaxy.ansible.com/me/preferences, and passed to the "
"ansible-galaxy CLI via a file at {0} or (insecurely) via the `--token` "
"command-line argument.".format(to_text(C.GALAXY_TOKEN_PATH)))
sys.exit(1)
self.api_servers = []
self.galaxy = None
self._api = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs',
default=C.GALAXY_IGNORE_CERTS, help='Ignore SSL certificate validation errors.')
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
collections_path = opt_help.argparse.ArgumentParser(add_help=False)
collections_path.add_argument('-p', '--collections-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True),
default=AnsibleCollectionConfig.collection_paths,
action=opt_help.PrependListAction,
help="One or more directories to search for collections in addition "
"to the default COLLECTIONS_PATHS. Separate multiple paths "
"with '{0}'.".format(os.path.pathsep))
cache_options = opt_help.argparse.ArgumentParser(add_help=False)
cache_options.add_argument('--clear-response-cache', dest='clear_response_cache', action='store_true',
default=False, help='Clear the existing server response cache.')
cache_options.add_argument('--no-cache', dest='no_cache', action='store_true', default=False,
help='Do not use the server response cache.')
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_download_options(collection_parser, parents=[common, cache_options])
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force, cache_options])
self.add_list_options(collection_parser, parents=[common, collections_path])
self.add_verify_options(collection_parser, parents=[common, collections_path])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_download_options(self, parser, parents=None):
download_parser = parser.add_parser('download', parents=parents,
help='Download collections and their dependencies as a tarball for an '
'offline install.')
download_parser.set_defaults(func=self.execute_download)
download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*')
download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collection(s) listed as dependencies.")
download_parser.add_argument('-p', '--download-path', dest='download_path',
default='./collections',
help='The directory to download the collections to.')
download_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be downloaded.')
download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_COLLECTION_SKELETON if galaxy_type == 'collection' else C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
galaxy_type = 'role'
if parser.metavar == 'COLLECTION_ACTION':
galaxy_type = 'collection'
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type))
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type)
if galaxy_type == 'collection':
list_parser.add_argument('--format', dest='output_format', choices=('human', 'yaml', 'json'), default='human',
help="Format to display the list of collections in.")
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_verify_options(self, parser, parents=None):
galaxy_type = 'collection'
verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) '
'found on the server and the installed copy. This does not verify dependencies.')
verify_parser.set_defaults(func=self.execute_verify)
verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The installed collection(s) name. '
'This is mutually exclusive with --requirements-file.')
verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during verification and continue with the next specified collection.')
verify_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Validate collection integrity locally without contacting server for '
'canonical manifest hash.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
verify_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
verify_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before using '
'it to verify the rest of the contents of a collection from a Galaxy server. Use in '
'conjunction with a positional collection name (mutually exclusive with --requirements-file).')
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
verify_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
verify_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=self._get_default_collection_path(),
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
install_parser.add_argument('-U', '--upgrade', dest='upgrade', action='store_true', default=False,
help='Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided')
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before '
'installing the collection from a Galaxy server. Use in conjunction with a positional '
'collection name (mutually exclusive with --requirements-file).')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
else:
install_parser.add_argument('-r', '--role-file', dest='requirements',
help='A file containing a list of roles to be installed.')
if self._implicit_role and ('-r' in self._raw_args or '--role-file' in self._raw_args):
# Any collections in the requirements files will also be installed
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during collection signature verification')
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection is built to. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required, option_type):
return {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
'type': option_type,
}
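# For example, for a hypothetical server named 'release_galaxy',
# server_config_def('release_galaxy', 'url', True, 'str') builds a definition
# resolved from the [galaxy_server.release_galaxy] `url` ini key or the
# ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_URL environment variable.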
validate_certs_fallback = not context.CLIARGS['ignore_certs']
galaxy_options = {}
for optional_key in ['clear_response_cache', 'no_cache']:
if optional_key in context.CLIARGS:
galaxy_options[optional_key] = context.CLIARGS[optional_key]
config_servers = []
# Need to filter out empty strings or non truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_priority, server_key in enumerate(server_list, start=1):
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req, ensure_type)) for k, req, ensure_type in SERVER_DEF)
defs = AnsibleLoader(yaml_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as kwarg to GalaxyApi
auth_url = server_options.pop('auth_url', None)
client_id = server_options.pop('client_id', None)
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
available_api_versions = None
v3 = server_options.pop('v3', None)
validate_certs = server_options['validate_certs']
if validate_certs is None:
validate_certs = validate_certs_fallback
server_options['validate_certs'] = validate_certs
if v3:
# This allows a user to explicitly indicate the server uses the /v3 API
# This was added for testing against pulp_ansible and I'm not sure it has
# a practical purpose outside of this use case. As such, this option is not
# documented as of now
server_options['available_api_versions'] = {'v3': '/v3'}
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username,
server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=validate_certs,
client_id=client_id)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
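# Token resolution above, in brief: username + password -> BasicAuthToken;
# otherwise the configured token (or the NoTokenSentinel placeholder) becomes
# a KeycloakToken when auth_url is set, else a plain GalaxyToken.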
server_options.update(galaxy_options)
config_servers.append(GalaxyAPI(
self.galaxy, server_key,
priority=server_priority,
**server_options
))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
if cmd_server:
# Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
priority=len(config_servers) + 1,
validate_certs=validate_certs_fallback,
**galaxy_options
))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
priority=0,
validate_certs=validate_certs_fallback,
**galaxy_options
))
return context.CLIARGS['func']()
@property
def api(self):
if self._api:
return self._api
for server in self.api_servers:
try:
if u'v1' in server.available_api_versions:
self._api = server
break
except Exception:
continue
if not self._api:
self._api = self.api_servers[0]
return self._api
def _get_default_collection_path(self):
return C.COLLECTIONS_PATHS[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True, artifacts_manager=None, validate_signature_options=True):
"""
Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are 2
requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name; defaults to the name from Galaxy, or the repo name if src is a URL.
scm: If src is a URL, specify the SCM. Only git and hg are supported and it defaults to git.
version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
type: git|file|url|galaxy
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:param artifacts_manager: Artifacts manager.
:return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles') or []:
requirements['roles'] += parse_role_req(role_req)
requirements['collections'] = [
Requirement.from_requirement_dict(
self._init_coll_req_dict(collection_req),
artifacts_manager,
validate_signature_options,
)
for collection_req in file_requirements.get('collections') or []
]
return requirements
def _init_coll_req_dict(self, coll_req):
if not isinstance(coll_req, dict):
# Assume it's a string:
return {'name': coll_req}
if (
'name' not in coll_req or
not coll_req.get('source') or
coll_req.get('type', 'galaxy') != 'galaxy'
):
return coll_req
# Try and match up the requirement source with our list of Galaxy API
# servers defined in the config, otherwise create a server with that
# URL without any auth.
coll_req['source'] = next(
iter(
srvr for srvr in self.api_servers
if coll_req['source'] in {srvr.name, srvr.api_server}
),
GalaxyAPI(
self.galaxy,
'explicit_requirement_{name!s}'.format(
name=coll_req['name'],
),
coll_req['source'],
validate_certs=not context.CLIARGS['ignore_certs'],
),
)
return coll_req
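# Illustrative behaviour: the string 'ns.coll' becomes {'name': 'ns.coll'};
# a dict such as {'name': 'ns.coll', 'source': 'release_galaxy'} has its
# source matched against the configured API servers by name or URL, falling
# back to a one-off unauthenticated GalaxyAPI for that URL.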
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
# Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description'].
galaxy_info = role_info.get('galaxy_info', {})
description = role_info.get('description', galaxy_info.get('description', ''))
text.append(u"\tdescription: %s" % description)
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
# make sure we have a trailing newline returned
text.append(u"")
return u'\n'.join(text)
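# Illustrative output for a hypothetical role (tab-indented in practice):
#   Role: geerlingguy.apache
#       description: Apache 2.x for Linux
#       galaxy_info:
#           license: MIT
#           min_ansible_version: 2.9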
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
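# Illustrative transformation:
#   comment_ify("See L(the docs, https://example.com) for C(namespace) rules")
# yields a '# '-prefixed, wrapped line reading:
#   # See the docs <https://example.com> for 'namespace' rules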
loader = DataLoader()
templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config})
templar.environment.filters['comment_ify'] = comment_ify
meta_value = templar.template(meta_template)
return meta_value
def _require_one_of_collections_requirements(
self, collections, requirements_file,
signatures=None,
artifacts_manager=None,
):
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
if signatures is not None:
raise AnsibleError(
"The --signatures option and --requirements-file are mutually exclusive. "
"Use the --signatures with positional collection_name args or provide a "
"'signatures' key for requirements in the --requirements-file."
)
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(
requirements_file,
allow_old_format=False,
artifacts_manager=artifacts_manager,
)
else:
requirements = {
'collections': [
Requirement.from_string(coll_input, artifacts_manager, signatures)
for coll_input in collections
],
'roles': [],
}
return requirements
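# For illustration: called with collections=('ns.coll:>=1.0.0',) and no
# requirements file, this returns {'collections': [<Requirement for ns.coll>],
# 'roles': []}; passing both a collection name and --requirements-file (or
# neither) raises AnsibleError.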
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(
to_text(collection_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force,
)
@with_collection_artifacts_manager
def execute_download(self, artifacts_manager=None):
collections = context.CLIARGS['args']
no_deps = context.CLIARGS['no_deps']
download_path = context.CLIARGS['download_path']
requirements_file = context.CLIARGS['requirements']
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
artifacts_manager=artifacts_manager,
)['collections']
download_path = GalaxyCLI._resolve_path(download_path)
b_download_path = to_bytes(download_path, errors='surrogate_or_strict')
if not os.path.exists(b_download_path):
os.makedirs(b_download_path)
download_collections(
requirements, download_path, self.api_servers, no_deps,
context.CLIARGS['allow_pre_release'],
artifacts_manager=artifacts_manager,
)
return 0
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version='.'.join(ansible_version.split('.')[:2]), # x.y
dependencies=[],
))
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
skeleton_ignore_expressions = C.GALAXY_COLLECTION_SKELETON_IGNORE
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
if obj_skeleton is not None:
own_skeleton = False
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
loader = DataLoader()
templar = Templar(loader, variables=inject_data)
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
# Filter out ignored directory names
# Use [:] to mutate the list os.walk uses
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict')
b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict')
with open(dest_file, 'wb') as df:
df.write(b_rendered)
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
prints out detailed information about an installed role as well as info available from the galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
if not context.CLIARGS['offline']:
remote_data = None
try:
remote_data = self.api.lookup_role_by_name(role, False)
except AnsibleError as e:
if e.http_code == 400 and 'Bad Request' in e.message:
# Role does not exist in Ansible Galaxy
data = u"- the role %s was not found" % role
break
raise AnsibleError("Unable to find info about '%s': %s" % (role, e))
if remote_data:
role_info.update(remote_data)
elif context.CLIARGS['offline'] and not gr._exists:
data = u"- the role %s was not found" % role
break
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data += self._display_role_info(role_info)
self.pager(data)
@with_collection_artifacts_manager
def execute_verify(self, artifacts_manager=None):
collections = context.CLIARGS['args']
search_paths = context.CLIARGS['collections_path']
ignore_errors = context.CLIARGS['ignore_errors']
local_verify_only = context.CLIARGS['offline']
requirements_file = context.CLIARGS['requirements']
signatures = context.CLIARGS['signatures']
if signatures is not None:
signatures = list(signatures)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)['collections']
resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths]
results = verify_collections(
requirements, resolved_paths,
self.api_servers, ignore_errors,
local_verify_only=local_verify_only,
artifacts_manager=artifacts_manager,
)
if any(result for result in results if not result.success):
return 1
return 0
@with_collection_artifacts_manager
def execute_install(self, artifacts_manager=None):
"""
Install one or more roles (``ansible-galaxy role install``), or one or more collections (``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file.
:param artifacts_manager: Artifacts manager.
"""
install_items = context.CLIARGS['args']
requirements_file = context.CLIARGS['requirements']
collection_path = None
signatures = context.CLIARGS.get('signatures')
if signatures is not None:
signatures = list(signatures)
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
two_type_warning = "The requirements file '%s' contains {0}s which will be ignored. To install these {0}s " \
"run 'ansible-galaxy {0} install -r' or to install both at the same time run " \
"'ansible-galaxy install -r' without a custom install path." % to_text(requirements_file)
# TODO: Would be nice to share the same behaviour with args and -r in collections and roles.
collection_requirements = []
role_requirements = []
if context.CLIARGS['type'] == 'collection':
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path'])
requirements = self._require_one_of_collections_requirements(
install_items, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)
collection_requirements = requirements['collections']
if requirements['roles']:
display.vvv(two_type_warning.format('role'))
else:
if not install_items and requirements_file is None:
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
if requirements_file:
if not (requirements_file.endswith('.yaml') or requirements_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
galaxy_args = self._raw_args
will_install_collections = self._implicit_role and '-p' not in galaxy_args and '--roles-path' not in galaxy_args
requirements = self._parse_requirements_file(
requirements_file,
artifacts_manager=artifacts_manager,
validate_signature_options=will_install_collections,
)
role_requirements = requirements['roles']
# We can only install collections and roles at the same time if the type wasn't specified and the -p
# argument was not used. If collections are present in the requirements then at least display a msg.
if requirements['collections'] and (not self._implicit_role or '-p' in galaxy_args or
'--roles-path' in galaxy_args):
# We only want to display a warning if 'ansible-galaxy install -r ... -p ...'. Other cases the user
# was explicit about the type and shouldn't care that collections were skipped.
display_func = display.warning if self._implicit_role else display.vvv
display_func(two_type_warning.format('collection'))
else:
collection_path = self._get_default_collection_path()
collection_requirements = requirements['collections']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
role_requirements.append(GalaxyRole(self.galaxy, self.api, **role))
if not role_requirements and not collection_requirements:
display.display("Skipping install, no requirements found")
return
if role_requirements:
display.display("Starting galaxy role install process")
self._execute_install_role(role_requirements)
if collection_requirements:
display.display("Starting galaxy collection install process")
# Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in
# the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above).
self._execute_install_collection(
collection_requirements, collection_path,
artifacts_manager=artifacts_manager,
)
def _execute_install_collection(
self, requirements, path, artifacts_manager,
):
force = context.CLIARGS['force']
ignore_errors = context.CLIARGS['ignore_errors']
no_deps = context.CLIARGS['no_deps']
force_with_deps = context.CLIARGS['force_with_deps']
disable_gpg_verify = context.CLIARGS['disable_gpg_verify']
# If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS
allow_pre_release = context.CLIARGS.get('allow_pre_release', False)
upgrade = context.CLIARGS.get('upgrade', False)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection will not be picked up in an Ansible "
"run, unless within a playbook-adjacent collections directory." % (to_text(path), to_text(":".join(collections_path))))
output_path = validate_collection_path(path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(
requirements, output_path, self.api_servers, ignore_errors,
no_deps, force, force_with_deps, upgrade,
allow_pre_release=allow_pre_release,
artifacts_manager=artifacts_manager,
disable_gpg_verify=disable_gpg_verify,
)
return 0
def _execute_install_role(self, requirements):
role_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
for role in requirements:
# only process roles in role files when their names match the given args, if any
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = (role.metadata.get('dependencies') or []) + role.requirements
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in requirements:
display.display('- adding dependency: %s' % to_text(dep_role))
requirements.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
requirements.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
requirements.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List installed collections or roles
"""
if context.CLIARGS['type'] == 'role':
self.execute_list_role()
elif context.CLIARGS['type'] == 'collection':
self.execute_list_collection()
def execute_list_role(self):
"""
List all roles installed on the local system or a specific role
"""
path_found = False
role_found = False
warnings = []
roles_search_paths = context.CLIARGS['roles_path']
role_name = context.CLIARGS['role']
for path in roles_search_paths:
role_path = GalaxyCLI._resolve_path(path)
if os.path.isdir(role_path):
path_found = True
else:
warnings.append("- the configured path {0} does not exist.".format(path))
continue
if role_name:
# show the requested role, if it exists
gr = GalaxyRole(self.galaxy, self.api, role_name, path=os.path.join(role_path, role_name))
if os.path.isdir(gr.path):
role_found = True
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
break
warnings.append("- the role %s was not found" % role_name)
else:
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
if not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.api, path_file, path=path)
if gr.metadata:
_display_role(gr)
# Do not warn if the role was found in any of the search paths
if role_found and role_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
return 0
@with_collection_artifacts_manager
def execute_list_collection(self, artifacts_manager=None):
"""
List all collections installed on the local system
:param artifacts_manager: Artifacts manager.
"""
output_format = context.CLIARGS['output_format']
collections_search_paths = set(context.CLIARGS['collections_path'])
collection_name = context.CLIARGS['collection']
default_collections_path = AnsibleCollectionConfig.collection_paths
collections_in_paths = {}
warnings = []
path_found = False
collection_found = False
for path in collections_search_paths:
collection_path = GalaxyCLI._resolve_path(path)
if not os.path.exists(collection_path):
if path in default_collections_path:
# don't warn for missing default paths
continue
warnings.append("- the configured path {0} does not exist.".format(collection_path))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
path_found = True
if collection_name:
# list a specific collection
validate_collection_name(collection_name)
namespace, collection = collection_name.split('.')
collection_path = validate_collection_path(collection_path)
b_collection_path = to_bytes(os.path.join(collection_path, namespace, collection), errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
warnings.append("- unable to find {0} in collection paths".format(collection_name))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
collection_found = True
try:
collection = Requirement.from_dir_path_as_unknown(
b_collection_path,
artifacts_manager,
)
except ValueError as val_err:
six.raise_from(AnsibleError(val_err), val_err)
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver}
}
continue
fqcn_width, version_width = _get_collection_widths([collection])
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
_display_collection(collection, fqcn_width, version_width)
else:
# list all collections
collection_path = validate_collection_path(path)
if os.path.isdir(collection_path):
display.vvv("Searching {0} for collections".format(collection_path))
collections = list(find_existing_collections(
collection_path, artifacts_manager,
))
else:
# There was no 'ansible_collections/' directory in the path, so there
# are no collections here.
display.vvv("No 'ansible_collections' directory found at {0}".format(collection_path))
continue
if not collections:
display.vvv("No collections found at {0}".format(collection_path))
continue
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver} for collection in collections
}
continue
# Display header
fqcn_width, version_width = _get_collection_widths(collections)
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
# Sort collections by the namespace and name
for collection in sorted(collections, key=to_text):
_display_collection(collection, fqcn_width, version_width)
# Do not warn if the specific collection was found in any of the search paths
if collection_found and collection_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
if output_format == 'json':
display.display(json.dumps(collections_in_paths))
elif output_format == 'yaml':
display.display(yaml_dump(collections_in_paths))
return 0
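# Illustrative `--format json` output for a single search path (the path and
# version values are examples only):
#   {"/home/user/.ansible/collections/ansible_collections":
#       {"community.general": {"version": "4.5.0"}}}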
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' searches for roles on the Ansible Galaxy server'''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return True
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
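# poll the Galaxy import task, printing any new log messages as they
# arrive, until the task reaches a terminal state (SUCCESS or FAILED)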
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return True
def main(args=None):
GalaxyCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,684 |
'"include" is deprecated, use include_tasks/import_tasks instead.' deprecation warning is without link to place explaining how to deal with it
|
### Summary
I noticed
> [DEPRECATION WARNING]: "include" is deprecated, use include_tasks/import_tasks instead. This feature will be removed in version 2.16.
Sadly, this deprecation warning carries no link to documentation explaining how it should be fixed.
I went to the changelog, but https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG.rst is not really helpful for finding documentation.
Searching for `import_tasks include_tasks difference ansible` on the internet finally turned up https://serverfault.com/questions/875247/whats-the-difference-between-include-tasks-and-import-tasks
Every single person using Ansible will need to search for about 2 to 10 minutes - multiply that by the number of Ansible users and it adds up to quite a big waste of time.
If there is a good reason to deprecate something - please link to documentation explaining how to fix code that worked fine so far.
Or maybe at least mention which version is equivalent to the current behaviour.
(ideally there would be `ansible undeprecate_include` which would edit all instances of `include` to whichever version is matching current one and `ansible undeprecate_include include_tasks` which would use `include_tasks` and `ansible undeprecate_include import_tasks`)
### Issue Type
Documentation Report
### Component Name
ansible (sorry)
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/mateusz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/mateusz/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
(no visible output)
```
### OS / Environment
Lubuntu 20.04
### Additional Information
No googling needed to find docs, lower risk of finding misleading docs
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76684
|
https://github.com/ansible/ansible/pull/77599
|
578a815271a9a44a1f7d69888d6afef49e658d96
|
1802684c7b94cd323e573c5b89cc77e4e1552c83
| 2022-01-08T08:24:10Z |
python
| 2022-04-25T15:44:31Z |
changelogs/fragments/77599-add-url-include-deprecation.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,684 |
'"include" is deprecated, use include_tasks/import_tasks instead.' deprecation warning is without link to place explaining how to deal with it
|
### Summary
I noticed
> [DEPRECATION WARNING]: "include" is deprecated, use include_tasks/import_tasks instead. This feature will be removed in version 2.16.
Sadly, this deprecation warning carries no link to documentation explaining how it should be fixed.
I went to the changelog, but https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG.rst is not really helpful for finding documentation.
Searching for `import_tasks include_tasks difference ansible` on the internet finally turned up https://serverfault.com/questions/875247/whats-the-difference-between-include-tasks-and-import-tasks
Every single person using Ansible will need to search for about 2 to 10 minutes - multiply that by the number of Ansible users and it adds up to quite a big waste of time.
If there is a good reason to deprecate something - please link to documentation explaining how to fix code that worked fine so far.
Or maybe at least mention which version is equivalent to the current behaviour.
(ideally there would be `ansible undeprecate_include` which would edit all instances of `include` to whichever version is matching current one and `ansible undeprecate_include include_tasks` which would use `include_tasks` and `ansible undeprecate_include import_tasks`)
### Issue Type
Documentation Report
### Component Name
ansible (sorry)
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/mateusz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/mateusz/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
(no visible output)
```
### OS / Environment
Lubuntu 20.04
### Additional Information
No googling needed to find docs, lower risk of finding misleading docs
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76684
|
https://github.com/ansible/ansible/pull/77599
|
578a815271a9a44a1f7d69888d6afef49e658d96
|
1802684c7b94cd323e573c5b89cc77e4e1552c83
| 2022-01-08T08:24:10Z |
python
| 2022-04-25T15:44:31Z |
lib/ansible/playbook/helpers.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible import constants as C
from ansible.errors import AnsibleParserError, AnsibleUndefinedVariable, AnsibleFileNotFound, AnsibleAssertionError
from ansible.module_utils._text import to_native
from ansible.module_utils.six import string_types
from ansible.parsing.mod_args import ModuleArgsParser
from ansible.utils.display import Display
display = Display()
def load_list_of_blocks(ds, play, parent_block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None):
'''
Given a list of mixed task/block data (parsed from YAML),
return a list of Block() objects, where implicit blocks
are created for each bare Task.
'''
# we import here to prevent a circular dependency with imports
from ansible.playbook.block import Block
if not isinstance(ds, (list, type(None))):
raise AnsibleAssertionError('%s should be a list or None but is %s' % (ds, type(ds)))
block_list = []
if ds:
count = iter(range(len(ds)))
for i in count:
block_ds = ds[i]
# Implicit blocks are created by bare tasks listed in a play without
# an explicit block statement. If we have two implicit blocks in a row,
# squash them down to a single block to save processing time later.
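# Illustrative sketch (task/block names are hypothetical):
#   ds == [task1, task2, block1, task3] produces
#   [implicit_block(task1, task2), block1, implicit_block(task3)]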
implicit_blocks = []
while block_ds is not None and not Block.is_block(block_ds):
implicit_blocks.append(block_ds)
i += 1
# Advance the iterator, so we don't repeat
next(count, None)
try:
block_ds = ds[i]
except IndexError:
block_ds = None
# Loop both implicit blocks and block_ds as block_ds is the next in the list
for b in (implicit_blocks, block_ds):
if b:
block_list.append(
Block.load(
b,
play=play,
parent_block=parent_block,
role=role,
task_include=task_include,
use_handlers=use_handlers,
variable_manager=variable_manager,
loader=loader,
)
)
return block_list
def load_list_of_tasks(ds, play, block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None):
'''
Given a list of task datastructures (parsed from YAML),
return a list of Task() or TaskInclude() objects.
'''
# we import here to prevent a circular dependency with imports
from ansible.playbook.block import Block
from ansible.playbook.handler import Handler
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.role_include import IncludeRole
from ansible.playbook.handler_task_include import HandlerTaskInclude
from ansible.template import Templar
if not isinstance(ds, list):
raise AnsibleAssertionError('The ds (%s) should be a list but was a %s' % (ds, type(ds)))
task_list = []
for task_ds in ds:
if not isinstance(task_ds, dict):
raise AnsibleAssertionError('The ds (%s) should be a dict but was a %s' % (ds, type(ds)))
if 'block' in task_ds:
t = Block.load(
task_ds,
play=play,
parent_block=block,
role=role,
task_include=task_include,
use_handlers=use_handlers,
variable_manager=variable_manager,
loader=loader,
)
task_list.append(t)
else:
args_parser = ModuleArgsParser(task_ds)
try:
(action, args, delegate_to) = args_parser.parse(skip_action_validation=True)
except AnsibleParserError as e:
# if the raised exception was created with obj=ds args, then it includes the detail
# so we don't need to add it and can just re-raise.
if e.obj:
raise
# But if it wasn't, we can add the yaml object now to get more detail
raise AnsibleParserError(to_native(e), obj=task_ds, orig_exc=e)
if action in C._ACTION_ALL_INCLUDE_IMPORT_TASKS:
if use_handlers:
include_class = HandlerTaskInclude
else:
include_class = TaskInclude
t = include_class.load(
task_ds,
block=block,
role=role,
task_include=None,
variable_manager=variable_manager,
loader=loader
)
all_vars = variable_manager.get_vars(play=play, task=t)
templar = Templar(loader=loader, variables=all_vars)
# check to see if this include is dynamic or static:
# 1. the user has set the 'static' option to false or true
# 2. one of the appropriate config options was set
if action in C._ACTION_INCLUDE_TASKS:
is_static = False
elif action in C._ACTION_IMPORT_TASKS:
is_static = True
else:
display.deprecated('"include" is deprecated, use include_tasks/import_tasks instead', "2.16")
is_static = not templar.is_template(t.args['_raw_params']) and t.all_parents_static() and not t.loop
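# i.e. a bare 'include' is only treated as a static import when its
# target is not templated, every parent include was itself static,
# and no loop is attached; otherwise it remains dynamic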
if is_static:
if t.loop is not None:
if action in C._ACTION_IMPORT_TASKS:
raise AnsibleParserError("You cannot use loops on 'import_tasks' statements. You should use 'include_tasks' instead.", obj=task_ds)
else:
raise AnsibleParserError("You cannot use 'static' on an include with a loop", obj=task_ds)
# we set a flag to indicate this include was static
t.statically_loaded = True
# handle relative includes by walking up the list of parent include
# tasks and checking the relative result to see if it exists
parent_include = block
cumulative_path = None
found = False
subdir = 'tasks'
if use_handlers:
subdir = 'handlers'
while parent_include is not None:
if not isinstance(parent_include, TaskInclude):
parent_include = parent_include._parent
continue
try:
parent_include_dir = os.path.dirname(templar.template(parent_include.args.get('_raw_params')))
except AnsibleUndefinedVariable as e:
if not parent_include.statically_loaded:
raise AnsibleParserError(
"Error when evaluating variable in dynamic parent include path: %s. "
"When using static imports, the parent dynamic include cannot utilize host facts "
"or variables from inventory" % parent_include.args.get('_raw_params'),
obj=task_ds,
suppress_extended_error=True,
orig_exc=e
)
raise
if cumulative_path is None:
cumulative_path = parent_include_dir
elif not os.path.isabs(cumulative_path):
cumulative_path = os.path.join(parent_include_dir, cumulative_path)
include_target = templar.template(t.args['_raw_params'])
if t._role:
new_basedir = os.path.join(t._role._role_path, subdir, cumulative_path)
include_file = loader.path_dwim_relative(new_basedir, subdir, include_target)
else:
include_file = loader.path_dwim_relative(loader.get_basedir(), cumulative_path, include_target)
if os.path.exists(include_file):
found = True
break
else:
parent_include = parent_include._parent
if not found:
try:
include_target = templar.template(t.args['_raw_params'])
except AnsibleUndefinedVariable as e:
raise AnsibleParserError(
"Error when evaluating variable in import path: %s.\n\n"
"When using static imports, ensure that any variables used in their names are defined in vars/vars_files\n"
"or extra-vars passed in from the command line. Static imports cannot use variables from facts or inventory\n"
"sources like group or host vars." % t.args['_raw_params'],
obj=task_ds,
suppress_extended_error=True,
orig_exc=e)
if t._role:
include_file = loader.path_dwim_relative(t._role._role_path, subdir, include_target)
else:
include_file = loader.path_dwim(include_target)
data = loader.load_from_file(include_file)
if not data:
display.warning('file %s is empty and had no tasks to include' % include_file)
continue
elif not isinstance(data, list):
raise AnsibleParserError("included task files must contain a list of tasks", obj=data)
# since we can't send callbacks here, we display a message directly in
# the same fashion used by the on_include callback. We also do it here,
# because the recursive nature of helper methods means we may be loading
# nested includes, and we want the include order printed correctly
display.vv("statically imported: %s" % include_file)
ti_copy = t.copy(exclude_parent=True)
ti_copy._parent = block
included_blocks = load_list_of_blocks(
data,
play=play,
parent_block=None,
task_include=ti_copy,
role=role,
use_handlers=use_handlers,
loader=loader,
variable_manager=variable_manager,
)
tags = ti_copy.tags[:]
# now we extend the tags on each of the included blocks
for b in included_blocks:
b.tags = list(set(b.tags).union(tags))
# END FIXME
# FIXME: handlers shouldn't need this special handling, but do
# right now because they don't iterate blocks correctly
if use_handlers:
for b in included_blocks:
task_list.extend(b.block)
else:
task_list.extend(included_blocks)
else:
t.is_static = False
task_list.append(t)
elif action in C._ACTION_ALL_PROPER_INCLUDE_IMPORT_ROLES:
ir = IncludeRole.load(
task_ds,
block=block,
role=role,
task_include=None,
variable_manager=variable_manager,
loader=loader,
)
# 1. the user has set the 'static' option to false or true
# 2. one of the appropriate config options was set
is_static = False
if action in C._ACTION_IMPORT_ROLE:
is_static = True
if is_static:
if ir.loop is not None:
if action in C._ACTION_IMPORT_ROLE:
raise AnsibleParserError("You cannot use loops on 'import_role' statements. You should use 'include_role' instead.", obj=task_ds)
else:
raise AnsibleParserError("You cannot use 'static' on an include_role with a loop", obj=task_ds)
# we set a flag to indicate this include was static
ir.statically_loaded = True
# template the role name now, if needed
all_vars = variable_manager.get_vars(play=play, task=ir)
templar = Templar(loader=loader, variables=all_vars)
ir._role_name = templar.template(ir._role_name)
# uses compiled list from object
blocks, _ = ir.get_block_list(variable_manager=variable_manager, loader=loader)
task_list.extend(blocks)
else:
# passes task object itself for later generation of list
task_list.append(ir)
else:
if use_handlers:
t = Handler.load(task_ds, block=block, role=role, task_include=task_include, variable_manager=variable_manager, loader=loader)
else:
t = Task.load(task_ds, block=block, role=role, task_include=task_include, variable_manager=variable_manager, loader=loader)
task_list.append(t)
return task_list
def load_list_of_roles(ds, play, current_role_path=None, variable_manager=None, loader=None, collection_search_list=None):
"""
Loads and returns a list of RoleInclude objects from the ds list of role definitions
:param ds: list of roles to load
:param play: calling Play object
:param current_role_path: path of the owning role, if any
:param variable_manager: varmgr to use for templating
:param loader: loader to use for DS parsing/services
:param collection_search_list: list of collections to search for unqualified role names
:return:
"""
# we import here to prevent a circular dependency with imports
from ansible.playbook.role.include import RoleInclude
if not isinstance(ds, list):
raise AnsibleAssertionError('ds (%s) should be a list but was a %s' % (ds, type(ds)))
roles = []
for role_def in ds:
i = RoleInclude.load(role_def, play=play, current_role_path=current_role_path, variable_manager=variable_manager,
loader=loader, collection_list=collection_search_list)
roles.append(i)
return roles
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,535 |
jinja parsing of lookup depends on input
|
### Summary
Ansible attempts to jinja-template the return value of a lookup if something in the lookup's input is itself a variable. But if nothing in the lookup is a template, it is fine with not jinja-templating the result of the lookup.
Do you have an explanation for the below-described behavior?
### Issue Type
Bug Report
### Component Name
lookup
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.4]
config file = /home/myuser/.ansible.cfg
configured module search path = ['/home/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/devenv-5/lib/python3.9/site-packages/ansible
ansible collection location = /home/myuser/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/devenv-5/bin/ansible
python version = 3.9.12 (main, Mar 25 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_LOG_PATH(/home/myuser/.ansible.cfg) = /home/myuser/ansible.log
DEFAULT_STDOUT_CALLBACK(/home/myuser/.ansible.cfg) = yaml
RETRY_FILES_ENABLED(/home/myuser/.ansible.cfg) = False
```
### OS / Environment
Fedora 35
### Steps to Reproduce
I was able to replicate this with the following playbook:
```yaml
---
- name: Test some kind of weird lookup thing
hosts: localhost
gather_facts: False
vars:
file_name: "lookup_test.yml.txt" #"{{ my_file }}"
tasks:
- name: Get file content
set_fact:
file_content: "{{ lookup('file', file_name) }}"
- name: Print content
debug:
msg: "{{ content }}"
vars:
content: "{{ file_content }}"
```
Where the lookup_test.yml.txt file was:
```
some_content: "---\nusername: \"{{ foobar }}\""
```
And I get the same results you saw where, if I hard code the file name in the playbook, it works fine:
```
$ ansible-playbook lookup_test.yml -e my_file=lookup_test.yml.txt
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test some kind of weird lookup thing] *********************************************************************************************************
TASK [Get file content] *****************************************************************************************************************************
ok: [localhost]
TASK [Print content] ********************************************************************************************************************************
ok: [localhost] => {
"msg": "some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\""
}
PLAY RECAP ******************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
But if I remove the hard coded file name and make it a variable by changing the line:
```
file_name: "lookup_test.yml.txt" #"{{ my_file }}"
```
to
```
file_name: "{{ my_file }}"
```
It now fails:
```
$ ansible-playbook lookup_test.yml -e my_file=lookup_test.yml.txt
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test some kind of weird lookup thing] *********************************************************************************************************
TASK [Get file content] *****************************************************************************************************************************
ok: [localhost]
TASK [Print content] ********************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ file_content }}: some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\": 'foobar' is undefined\n\nThe error appears to be in '/Users/my_user/test/lookup_test.yml': line 12, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Print content\n ^ here\n"}
PLAY RECAP ******************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Expected Results
The file's content should be printed as:
```
"some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\""
```
### Actual Results
```console
FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ file_content }}: some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\": 'foobar' is undefined\n\nThe error appears to be in '/Users/my_user/test/lookup_test.yml': line 12, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Print content\n ^ here\n"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77535
|
https://github.com/ansible/ansible/pull/77609
|
6fdec4a6ab140462e86f2f0618a48243169fb2c9
|
3980eb8c09d170a861351f8aff4a1aa1a8cbb626
| 2022-04-14T09:45:06Z |
python
| 2022-04-26T15:16:22Z |
changelogs/fragments/77535-prevent-losing-unsafe-lookups.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,535 |
jinja parsing of lookup depends on input
|
### Summary
Ansible attempts to jinja-template the return value of a lookup if something in the lookup's input is itself a variable. But if nothing in the lookup is a template, it is fine with not jinja-templating the result of the lookup.
Do you have an explanation for the below-described behavior?
### Issue Type
Bug Report
### Component Name
lookup
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.4]
config file = /home/myuser/.ansible.cfg
configured module search path = ['/home/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/devenv-5/lib/python3.9/site-packages/ansible
ansible collection location = /home/myuser/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/devenv-5/bin/ansible
python version = 3.9.12 (main, Mar 25 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_LOG_PATH(/home/myuser/.ansible.cfg) = /home/myuser/ansible.log
DEFAULT_STDOUT_CALLBACK(/home/myuser/.ansible.cfg) = yaml
RETRY_FILES_ENABLED(/home/myuser/.ansible.cfg) = False
```
### OS / Environment
Fedora 35
### Steps to Reproduce
I was able to replicate this with the following playbook:
```yaml
---
- name: Test some kind of weird lookup thing
hosts: localhost
gather_facts: False
vars:
file_name: "lookup_test.yml.txt" #"{{ my_file }}"
tasks:
- name: Get file content
set_fact:
file_content: "{{ lookup('file', file_name) }}"
- name: Print content
debug:
msg: "{{ content }}"
vars:
content: "{{ file_content }}"
```
Where the lookup_test.yml.txt file was:
```
some_content: "---\nusername: \"{{ foobar }}\""
```
And I get the same results you saw where, if I hard code the file name in the playbook, it works fine:
```
$ ansible-playbook lookup_test.yml -e my_file=lookup_test.yml.txt
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test some kind of weird lookup thing] *********************************************************************************************************
TASK [Get file content] *****************************************************************************************************************************
ok: [localhost]
TASK [Print content] ********************************************************************************************************************************
ok: [localhost] => {
"msg": "some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\""
}
PLAY RECAP ******************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
But if I remove the hard coded file name and make it a variable by changing the line:
```
file_name: "lookup_test.yml.txt" #"{{ my_file }}"
```
to
```
file_name: "{{ my_file }}"
```
It now fails:
```
$ ansible-playbook lookup_test.yml -e my_file=lookup_test.yml.txt
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test some kind of weird lookup thing] *********************************************************************************************************
TASK [Get file content] *****************************************************************************************************************************
ok: [localhost]
TASK [Print content] ********************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ file_content }}: some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\": 'foobar' is undefined\n\nThe error appears to be in '/Users/my_user/test/lookup_test.yml': line 12, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Print content\n ^ here\n"}
PLAY RECAP ******************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Expected Results
The file's content should be printed as:
```
"some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\""
```
### Actual Results
```console
FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ file_content }}: some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\": 'foobar' is undefined\n\nThe error appears to be in '/Users/my_user/test/lookup_test.yml': line 12, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Print content\n ^ here\n"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77535
|
https://github.com/ansible/ansible/pull/77609
|
6fdec4a6ab140462e86f2f0618a48243169fb2c9
|
3980eb8c09d170a861351f8aff4a1aa1a8cbb626
| 2022-04-14T09:45:06Z |
python
| 2022-04-26T15:16:22Z |
lib/ansible/template/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import datetime
import os
import pkgutil
import pwd
import re
import time
from collections.abc import Iterator, Sequence, Mapping, MappingView, MutableMapping
from contextlib import contextmanager
from hashlib import sha1
from numbers import Number
from traceback import format_exc
from jinja2.exceptions import TemplateSyntaxError, UndefinedError
from jinja2.loaders import FileSystemLoader
from jinja2.nativetypes import NativeEnvironment
from jinja2.runtime import Context, StrictUndefined
from ansible import constants as C
from ansible.errors import (
AnsibleAssertionError,
AnsibleError,
AnsibleFilterError,
AnsibleLookupError,
AnsibleOptionsError,
AnsiblePluginRemovedError,
AnsibleUndefinedVariable,
)
from ansible.module_utils.six import string_types, text_type
from ansible.module_utils._text import to_native, to_text, to_bytes
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.compat.importlib import import_module
from ansible.plugins.loader import filter_loader, lookup_loader, test_loader
from ansible.template.native_helpers import ansible_native_concat, ansible_eval_concat, ansible_concat
from ansible.template.template import AnsibleJ2Template
from ansible.template.vars import AnsibleJ2Vars
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.collection_loader._collection_finder import _get_collection_metadata
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.native_jinja import NativeJinjaText
from ansible.utils.unsafe_proxy import wrap_var
display = Display()
__all__ = ['Templar', 'generate_ansible_template_vars']
# Primitive Types which we don't want Jinja to convert to strings.
NON_TEMPLATED_TYPES = (bool, Number)
JINJA2_OVERRIDE = '#jinja2:'
JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin'))
JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end'))
RANGE_TYPE = type(range(0))
def generate_ansible_template_vars(path, fullpath=None, dest_path=None):
if fullpath is None:
b_path = to_bytes(path)
else:
b_path = to_bytes(fullpath)
try:
template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name
except (KeyError, TypeError):
template_uid = os.stat(b_path).st_uid
temp_vars = {
'template_host': to_text(os.uname()[1]),
'template_path': path,
'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)),
'template_uid': to_text(template_uid),
'template_run_date': datetime.datetime.now(),
'template_destpath': to_native(dest_path) if dest_path else None,
}
if fullpath is None:
temp_vars['template_fullpath'] = os.path.abspath(path)
else:
temp_vars['template_fullpath'] = fullpath
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=temp_vars['template_host'],
uid=temp_vars['template_uid'],
file=temp_vars['template_path'],
)
temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path))))
return temp_vars
def _escape_backslashes(data, jinja_env):
"""Double backslashes within jinja2 expressions
A user may enter something like this in a playbook::
debug:
msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}"
The string inside of the {{ gets interpreted multiple times. First by yaml,
then by python, and finally by jinja2 as part of its variable. Because
it is processed by both python and jinja2, the backslash escaped
characters get unescaped twice. This means that we'd normally have to use
four backslashes to escape that. This is painful for playbook authors as
they have to remember different rules for inside vs outside of a jinja2
expression (The backslashes outside of the "{{ }}" only get processed by
yaml and python. So they only need to be escaped once). The following
code fixes this by automatically performing the extra quoting of
backslashes inside of a jinja2 expression.
"""
if '\\' in data and '{{' in data:
new_data = []
d2 = jinja_env.preprocess(data)
in_var = False
for token in jinja_env.lex(d2):
if token[1] == 'variable_begin':
in_var = True
new_data.append(token[2])
elif token[1] == 'variable_end':
in_var = False
new_data.append(token[2])
elif in_var and token[1] == 'string':
# Double backslashes only if we're inside of a jinja2 variable
new_data.append(token[2].replace('\\', '\\\\'))
else:
new_data.append(token[2])
data = ''.join(new_data)
return data
def is_possibly_template(data, jinja_env):
"""Determines if a string looks like a template, by seeing if it
contains a jinja2 start delimiter. Does not guarantee that the string
is actually a template.
This is different than ``is_template`` which is more strict.
This method may return ``True`` on a string that is not templatable.
Useful when guarding passing a string for templating, but when
you want to allow the templating engine to make the final
assessment which may result in ``TemplateSyntaxError``.
"""
if isinstance(data, string_types):
for marker in (jinja_env.block_start_string, jinja_env.variable_start_string, jinja_env.comment_start_string):
if marker in data:
return True
return False
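# Quick sanity examples (assuming the default Jinja2 delimiters):
#   is_possibly_template('{{ foo }}', env)   -> True
#   is_possibly_template('{% if x %}', env)  -> True
#   is_possibly_template('plain text', env)  -> False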
def is_template(data, jinja_env):
"""This function attempts to quickly detect whether a value is a jinja2
template. To do so, we look for the first 2 matching jinja2 tokens for
start and end delimiters.
"""
found = None
start = True
comment = False
d2 = jinja_env.preprocess(data)
# Quick check to see if this is remotely like a template before doing
# more expensive investigation.
if not is_possibly_template(d2, jinja_env):
return False
# This wraps a lot of code, but this is due to lex returning a generator
# so we may get an exception at any part of the loop
try:
for token in jinja_env.lex(d2):
if token[1] in JINJA2_BEGIN_TOKENS:
if start and token[1] == 'comment_begin':
# Comments can wrap other token types
comment = True
start = False
# Example: variable_begin -> variable
found = token[1].split('_')[0]
elif token[1] in JINJA2_END_TOKENS:
if token[1].split('_')[0] == found:
return True
elif comment:
continue
return False
except TemplateSyntaxError:
return False
return False
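# Unlike is_possibly_template above, an unbalanced string such as '{{ foo'
# is not considered a template here: no matching end token is ever found
# while lexing (or lexing fails outright), so we fall through to False.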
def _count_newlines_from_end(in_str):
'''
Counts the number of newlines at the end of a string. This is used during
the jinja2 templating to ensure the count matches the input, since some newlines
may be thrown away during the templating.
'''
try:
i = len(in_str)
j = i - 1
while in_str[j] == '\n':
j -= 1
return i - 1 - j
except IndexError:
# Uncommon cases: zero length string and string containing only newlines
return i
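# e.g. (illustrative):
#   _count_newlines_from_end('foo\n\n') == 2
#   _count_newlines_from_end('') == 0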
def recursive_check_defined(item):
from jinja2.runtime import Undefined
if isinstance(item, MutableMapping):
for key in item:
recursive_check_defined(item[key])
elif isinstance(item, list):
for i in item:
recursive_check_defined(i)
else:
if isinstance(item, Undefined):
raise AnsibleFilterError("{0} is undefined".format(item))
def _is_rolled(value):
"""Helper method to determine if something is an unrolled generator,
iterator, or similar object
"""
return (
isinstance(value, Iterator) or
isinstance(value, MappingView) or
isinstance(value, RANGE_TYPE)
)
def _unroll_iterator(func):
"""Wrapper function, that intercepts the result of a templating
and auto unrolls a generator, so that users are not required to
explicitly use ``|list`` to unroll.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
if _is_rolled(ret):
return list(ret)
return ret
return _update_wrapper(wrapper, func)
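# A minimal usage sketch (names are illustrative):
#   squares = _unroll_iterator(lambda: (i * i for i in range(3)))
#   squares()  ->  [0, 1, 4] rather than a generator object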
def _update_wrapper(wrapper, func):
# This code is duplicated from ``functools.update_wrapper`` from Py3.7.
# ``functools.update_wrapper`` was failing when the func was ``functools.partial``
for attr in ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'):
try:
value = getattr(func, attr)
except AttributeError:
pass
else:
setattr(wrapper, attr, value)
for attr in ('__dict__',):
getattr(wrapper, attr).update(getattr(func, attr, {}))
wrapper.__wrapped__ = func
return wrapper
def _wrap_native_text(func):
"""Wrapper function, that intercepts the result of a filter
and wraps it into NativeJinjaText which is then used
in ``ansible_native_concat`` to indicate that it is a text
which should not be passed into ``literal_eval``.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
return NativeJinjaText(ret)
return _update_wrapper(wrapper, func)
class AnsibleUndefined(StrictUndefined):
'''
A custom Undefined class, which returns further Undefined objects on access,
rather than throwing an exception.
'''
def __getattr__(self, name):
if name == '__UNSAFE__':
# AnsibleUndefined should never be assumed to be unsafe
# This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True``
raise AttributeError(name)
# Return original Undefined object to preserve the first failure context
return self
def __getitem__(self, key):
# Return original Undefined object to preserve the first failure context
return self
def __repr__(self):
return 'AnsibleUndefined(hint={0!r}, obj={1!r}, name={2!r})'.format(
self._undefined_hint,
self._undefined_obj,
self._undefined_name
)
def __contains__(self, item):
# Return original Undefined object to preserve the first failure context
return self
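# e.g. chained access such as undefined_value.foo['bar'] keeps returning
# the same AnsibleUndefined instance, so the hint naming the originally
# undefined variable survives until the value is actually rendered.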
class AnsibleContext(Context):
'''
A custom context, which intercepts resolve() calls and sets a flag
internally if any variable lookup returns an AnsibleUnsafe value. This
flag is checked post-templating, and (when set) will result in the
final templated result being wrapped in AnsibleUnsafe.
'''
def __init__(self, *args, **kwargs):
super(AnsibleContext, self).__init__(*args, **kwargs)
self.unsafe = False
def _is_unsafe(self, val):
'''
Our helper function, which will also recursively check dict and
list entries due to the fact that they may be repr'd and contain
a key or value which contains jinja2 syntax and would otherwise
lose the AnsibleUnsafe value.
'''
if isinstance(val, dict):
for key in val.keys():
if self._is_unsafe(val[key]):
return True
elif isinstance(val, list):
for item in val:
if self._is_unsafe(item):
return True
elif getattr(val, '__UNSAFE__', False) is True:
return True
return False
def _update_unsafe(self, val):
if val is not None and not self.unsafe and self._is_unsafe(val):
self.unsafe = True
def resolve(self, key):
'''
The intercepted resolve(), which uses the helper above to set the
internal flag whenever an unsafe variable value is returned.
'''
val = super(AnsibleContext, self).resolve(key)
self._update_unsafe(val)
return val
def resolve_or_missing(self, key):
val = super(AnsibleContext, self).resolve_or_missing(key)
self._update_unsafe(val)
return val
def get_all(self):
"""Return the complete context as a dict including the exported
variables. For optimization reasons this might not return an
actual copy, so be careful when using it.
This is to prevent from running ``AnsibleJ2Vars`` through dict():
``dict(self.parent, **self.vars)``
In Ansible this means that ALL variables would be templated in the
process of re-creating the parent because ``AnsibleJ2Vars`` templates
each variable in its ``__getitem__`` method. Instead we re-create the
parent via ``AnsibleJ2Vars.add_locals`` that creates a new
``AnsibleJ2Vars`` copy without templating each variable.
This will prevent unnecessarily templating unused variables in cases
like setting a local variable and passing it to {% include %}
in a template.
Also see ``AnsibleJ2Template`` and
https://github.com/pallets/jinja/commit/d67f0fd4cc2a4af08f51f4466150d49da7798729
"""
if not self.vars:
return self.parent
if not self.parent:
return self.vars
if isinstance(self.parent, AnsibleJ2Vars):
return self.parent.add_locals(self.vars)
else:
# can this happen in Ansible?
return dict(self.parent, **self.vars)
class JinjaPluginIntercept(MutableMapping):
def __init__(self, delegatee, pluginloader, *args, **kwargs):
super(JinjaPluginIntercept, self).__init__(*args, **kwargs)
self._delegatee = delegatee
self._pluginloader = pluginloader
if self._pluginloader.class_name == 'FilterModule':
self._method_map_name = 'filters'
self._dirname = 'filter'
elif self._pluginloader.class_name == 'TestModule':
self._method_map_name = 'tests'
self._dirname = 'test'
self._collection_jinja_func_cache = {}
self._ansible_plugins_loaded = False
def _load_ansible_plugins(self):
if self._ansible_plugins_loaded:
return
for plugin in self._pluginloader.all():
try:
method_map = getattr(plugin, self._method_map_name)
self._delegatee.update(method_map())
except Exception as e:
display.warning("Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin._original_path), e))
continue
if self._pluginloader.class_name == 'FilterModule':
for plugin_name, plugin in self._delegatee.items():
if plugin_name in C.STRING_TYPE_FILTERS:
self._delegatee[plugin_name] = _wrap_native_text(plugin)
else:
self._delegatee[plugin_name] = _unroll_iterator(plugin)
self._ansible_plugins_loaded = True
# FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's impl's
# aren't supposed to change during a run
def __getitem__(self, key):
original_key = key
self._load_ansible_plugins()
try:
if not isinstance(key, string_types):
raise ValueError('key must be a string')
key = to_native(key)
if '.' not in key: # might be a built-in or legacy, check the delegatee dict first, then try for a last-chance base redirect
func = self._delegatee.get(key)
if func:
return func
key, leaf_key = get_fqcr_and_name(key)
seen = set()
while True:
if key in seen:
raise TemplateSyntaxError(
'recursive collection redirect found for %r' % original_key,
0
)
seen.add(key)
acr = AnsibleCollectionRef.try_parse_fqcr(key, self._dirname)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
ts = _get_collection_metadata(acr.collection)
# TODO: implement cycle detection (unified across collection redir as well)
routing_entry = ts.get('plugin_routing', {}).get(self._dirname, {}).get(leaf_key, {})
deprecation_entry = routing_entry.get('deprecation')
if deprecation_entry:
warning_text = deprecation_entry.get('warning_text')
removal_date = deprecation_entry.get('removal_date')
removal_version = deprecation_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" is deprecated'.format(self._dirname, key)
display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection)
tombstone_entry = routing_entry.get('tombstone')
if tombstone_entry:
warning_text = tombstone_entry.get('warning_text')
removal_date = tombstone_entry.get('removal_date')
removal_version = tombstone_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" has been removed'.format(self._dirname, key)
exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date,
collection_name=acr.collection, removed=True)
raise AnsiblePluginRemovedError(exc_msg)
redirect = routing_entry.get('redirect', None)
if redirect:
next_key, leaf_key = get_fqcr_and_name(redirect, collection=acr.collection)
display.vvv('redirecting (type: {0}) {1}.{2} to {3}'.format(self._dirname, acr.collection, acr.resource, next_key))
key = next_key
else:
break
func = self._collection_jinja_func_cache.get(key)
if func:
return func
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
raise KeyError()
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
# TODO: implement collection-level redirect
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
try:
plugin_impl = self._pluginloader.get(module_name)
except Exception as e:
raise TemplateSyntaxError(to_native(e), 0)
try:
method_map = getattr(plugin_impl, self._method_map_name)
func_items = method_map().items()
except Exception as e:
display.warning(
"Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin_impl._original_path), e),
)
continue
for func_name, func in func_items:
fq_name = '.'.join((parent_prefix, func_name))
# FIXME: detect/warn on intra-collection function name collisions
if self._pluginloader.class_name == 'FilterModule':
if fq_name.startswith(('ansible.builtin.', 'ansible.legacy.')) and \
func_name in C.STRING_TYPE_FILTERS:
self._collection_jinja_func_cache[fq_name] = _wrap_native_text(func)
else:
self._collection_jinja_func_cache[fq_name] = _unroll_iterator(func)
else:
self._collection_jinja_func_cache[fq_name] = func
function_impl = self._collection_jinja_func_cache[key]
return function_impl
except AnsiblePluginRemovedError as apre:
raise TemplateSyntaxError(to_native(apre), 0)
except KeyError:
raise
except Exception as ex:
display.warning('an unexpected error occurred during Jinja2 environment setup: {0}'.format(to_native(ex)))
display.vvv('exception during Jinja2 environment setup: {0}'.format(format_exc()))
raise TemplateSyntaxError(to_native(ex), 0)
def __setitem__(self, key, value):
return self._delegatee.__setitem__(key, value)
def __delitem__(self, key):
raise NotImplementedError()
def __iter__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return iter(self._delegatee)
def __len__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return len(self._delegatee)
def get_fqcr_and_name(resource, collection='ansible.builtin'):
if '.' not in resource:
name = resource
fqcr = collection + '.' + resource
else:
name = resource.split('.')[-1]
fqcr = resource
return fqcr, name
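# e.g. (illustrative):
#   get_fqcr_and_name('flatten')
#       -> ('ansible.builtin.flatten', 'flatten')
#   get_fqcr_and_name('community.general.json_query')
#       -> ('community.general.json_query', 'json_query')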
@_unroll_iterator
def _ansible_finalize(thing):
"""A custom finalize function for jinja2, which prevents None from being
returned. This avoids a string of ``"None"`` as ``None`` has no
importance in YAML.
The function is decorated with ``_unroll_iterator`` so that users are not
required to explicitly use ``|list`` to unroll a generator. This only
affects the scenario where the final result of templating
is a generator, e.g. ``range``, ``dict.items()`` and so on. Filters
which can produce a generator in the middle of a template are already
wrapped with ``_unroll_generator`` in ``JinjaPluginIntercept``.
"""
return thing if thing is not None else ''
class AnsibleEnvironment(NativeEnvironment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
concat = staticmethod(ansible_eval_concat)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader)
self.tests = JinjaPluginIntercept(self.tests, test_loader)
self.trim_blocks = True
self.undefined = AnsibleUndefined
self.finalize = _ansible_finalize
class AnsibleNativeEnvironment(AnsibleEnvironment):
concat = staticmethod(ansible_native_concat)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.finalize = _unroll_iterator(lambda thing: thing)
class Templar:
'''
The main class for templating, with the main entry-point of template().
'''
def __init__(self, loader, shared_loader_obj=None, variables=None):
# NOTE shared_loader_obj is deprecated, ansible.plugins.loader is used
# directly. Keeping the arg for now in case 3rd party code "uses" it.
self._loader = loader
self._available_variables = {} if variables is None else variables
self._cached_result = {}
self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
environment_class = AnsibleNativeEnvironment if C.DEFAULT_JINJA2_NATIVE else AnsibleEnvironment
self.environment = environment_class(
extensions=self._get_extensions(),
loader=FileSystemLoader(loader.get_basedir() if loader else '.'),
)
self.environment.template_class.environment_class = environment_class
# jinja2 global is inconsistent across versions, this normalizes them
self.environment.globals['dict'] = dict
# Custom globals
self.environment.globals['lookup'] = self._lookup
self.environment.globals['query'] = self.environment.globals['q'] = self._query_lookup
self.environment.globals['now'] = self._now_datetime
self.environment.globals['undef'] = self._make_undefined
# the current rendering context under which the templar class is working
self.cur_context = None
# FIXME this regex should be re-compiled each time variable_start_string and variable_end_string are changed
self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string))
self.jinja2_native = C.DEFAULT_JINJA2_NATIVE
def copy_with_new_env(self, environment_class=AnsibleEnvironment, **kwargs):
r"""Creates a new copy of Templar with a new environment.
:kwarg environment_class: Environment class used for creating a new environment.
:kwarg \*\*kwargs: Optional arguments for the new environment that override existing
environment attributes.
:returns: Copy of Templar with updated environment.
"""
# We need to use __new__ to skip __init__, mainly not to create a new
# environment there only to override it below
new_env = object.__new__(environment_class)
new_env.__dict__.update(self.environment.__dict__)
new_templar = object.__new__(Templar)
new_templar.__dict__.update(self.__dict__)
new_templar.environment = new_env
new_templar.jinja2_native = environment_class is AnsibleNativeEnvironment
mapping = {
'available_variables': new_templar,
'searchpath': new_env.loader,
}
for key, value in kwargs.items():
obj = mapping.get(key, new_env)
try:
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
return new_templar
def _get_extensions(self):
'''
Return jinja2 extensions to load.
If some extensions are set via jinja_extensions in ansible.cfg, we try
to load them with the jinja environment.
'''
jinja_exts = []
if C.DEFAULT_JINJA2_EXTENSIONS:
# make sure the configuration directive doesn't contain spaces
# and split extensions in an array
jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',')
return jinja_exts
@property
def available_variables(self):
return self._available_variables
@available_variables.setter
def available_variables(self, variables):
'''
Sets the list of template variables this Templar instance will use
to template things, so we don't have to pass them around between
internal methods. We also clear the template cache here, as the variables
are being changed.
'''
if not isinstance(variables, Mapping):
raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables)))
self._available_variables = variables
self._cached_result = {}
@contextmanager
def set_temporary_context(self, **kwargs):
"""Context manager used to set temporary templating context, without having to worry about resetting
original values afterward
Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to
set context on another object, it must be in ``mapping``.
"""
mapping = {
'available_variables': self,
'searchpath': self.environment.loader,
}
original = {}
for key, value in kwargs.items():
obj = mapping.get(key, self.environment)
try:
original[key] = getattr(obj, key)
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
yield
for key in original:
obj = mapping.get(key, self.environment)
setattr(obj, key, original[key])
def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None,
convert_data=True, static_vars=None, cache=True, disable_lookups=False):
'''
Templates (possibly recursively) any given data as input. If convert_bare is
set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}')
before being sent through the template engine.
'''
static_vars = [] if static_vars is None else static_vars
# Don't template unsafe variables, just return them.
if hasattr(variable, '__UNSAFE__'):
return variable
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
if convert_bare:
variable = self._convert_bare_variable(variable)
if isinstance(variable, string_types):
if not self.is_possibly_template(variable):
return variable
# Check to see if the string we are trying to render is just referencing a single
# var. In this case we don't want to accidentally change the type of the variable
# to a string by using the jinja template renderer. We just want to pass it through unchanged.
only_one = self.SINGLE_VAR.match(variable)
if only_one:
var_name = only_one.group(1)
if var_name in self._available_variables:
resolved_val = self._available_variables[var_name]
if isinstance(resolved_val, NON_TEMPLATED_TYPES):
return resolved_val
elif resolved_val is None:
return C.DEFAULT_NULL_REPRESENTATION
# Using a cache in order to prevent template calls with already templated variables
sha1_hash = None
if cache:
variable_hash = sha1(text_type(variable).encode('utf-8'))
options_hash = sha1(
(
text_type(preserve_trailing_newlines) +
text_type(escape_backslashes) +
text_type(fail_on_undefined) +
text_type(overrides)
).encode('utf-8')
)
sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest()
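# the cache key mixes the variable text with the templating options, so
# the same string templated under different options does not collide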
if sha1_hash in self._cached_result:
return self._cached_result[sha1_hash]
result = self.do_template(
variable,
preserve_trailing_newlines=preserve_trailing_newlines,
escape_backslashes=escape_backslashes,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
convert_data=convert_data,
)
# we only cache in the case where we have a single variable
# name, to make sure we're not putting things which may otherwise
# be dynamic in the cache (filters, lookups, etc.)
if cache and only_one:
self._cached_result[sha1_hash] = result
return result
elif is_sequence(variable):
return [self.template(
v,
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
) for v in variable]
elif isinstance(variable, Mapping):
d = {}
# we don't use iteritems() here to avoid problems if the underlying dict
# changes sizes due to the templating, which can happen with hostvars
for k in variable.keys():
if k not in static_vars:
d[k] = self.template(
variable[k],
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
else:
d[k] = variable[k]
return d
else:
return variable
def is_template(self, data):
'''lets us know if data has a template'''
if isinstance(data, string_types):
return is_template(data, self.environment)
elif isinstance(data, (list, tuple)):
for v in data:
if self.is_template(v):
return True
elif isinstance(data, dict):
for k in data:
if self.is_template(k) or self.is_template(data[k]):
return True
return False
templatable = is_template
def is_possibly_template(self, data):
return is_possibly_template(data, self.environment)
def _convert_bare_variable(self, variable):
'''
Wraps a bare string, which may have an attribute portion (ie. foo.bar)
in jinja2 variable braces so that it is evaluated properly.
'''
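        # e.g. 'foo.bar | upper' becomes '{{ foo.bar | upper }}'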
if isinstance(variable, string_types):
contains_filters = "|" in variable
first_part = variable.split("|")[0].split(".")[0].split("[")[0]
if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable:
return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string)
# the variable didn't meet the conditions to be converted,
# so just return it as-is
return variable
def _fail_lookup(self, name, *args, **kwargs):
raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name)
def _now_datetime(self, utc=False, fmt=None):
'''jinja2 global function to return current datetime, potentially formatted via strftime'''
if utc:
now = datetime.datetime.utcnow()
else:
now = datetime.datetime.now()
if fmt:
return now.strftime(fmt)
return now
def _query_lookup(self, name, *args, **kwargs):
''' wrapper for lookup, force wantlist true'''
kwargs['wantlist'] = True
return self._lookup(name, *args, **kwargs)
def _lookup(self, name, *args, **kwargs):
instance = lookup_loader.get(name, loader=self._loader, templar=self)
if instance is None:
raise AnsibleError("lookup plugin (%s) not found" % name)
wantlist = kwargs.pop('wantlist', False)
allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS)
errors = kwargs.pop('errors', 'strict')
loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, loader=self._loader, fail_on_undefined=True, convert_bare=False)
# safely catch run failures per #5059
try:
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
except (AnsibleUndefinedVariable, UndefinedError) as e:
raise AnsibleUndefinedVariable(e)
except AnsibleOptionsError as e:
# invalid options given to lookup, just reraise
raise e
except AnsibleLookupError as e:
# lookup handled error but still decided to bail
msg = 'Lookup failed but the error is being ignored: %s' % to_native(e)
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
raise e
return [] if wantlist else None
except Exception as e:
# errors not handled by lookup
msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \
(name, type(e), to_text(e))
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
display.vvv('exception during Jinja2 execution: {0}'.format(format_exc()))
raise AnsibleError(to_native(msg), orig_exc=e)
return [] if wantlist else None
if ran and allow_unsafe is False:
if self.cur_context:
self.cur_context.unsafe = True
if wantlist:
return wrap_var(ran)
try:
if isinstance(ran[0], NativeJinjaText):
ran = wrap_var(NativeJinjaText(",".join(ran)))
else:
ran = wrap_var(",".join(ran))
except TypeError:
# Lookup Plugins should always return lists. Throw an error if that's not
# the case:
if not isinstance(ran, Sequence):
raise AnsibleError("The lookup plugin '%s' did not return a list."
% name)
# The TypeError we can recover from is when the value *inside* of the list
# is not a string
if len(ran) == 1:
ran = wrap_var(ran[0])
else:
ran = wrap_var(ran)
return ran
def _make_undefined(self, hint=None):
from jinja2.runtime import Undefined
if hint is None or isinstance(hint, Undefined) or hint == '':
hint = "Mandatory variable has not been overridden"
return AnsibleUndefined(hint)
def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False,
convert_data=False):
if self.jinja2_native and not isinstance(data, string_types):
return data
# For preserving the number of input newlines in the output (used
# later in this method)
data_newlines = _count_newlines_from_end(data)
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
has_template_overrides = data.startswith(JINJA2_OVERRIDE)
try:
# NOTE Creating an overlay that lives only inside do_template means that overrides are not applied
# when templating nested variables in AnsibleJ2Vars where Templar.environment is used, not the overlay.
# This is historic behavior that is kept for backwards compatibility.
if overrides:
myenv = self.environment.overlay(overrides)
elif has_template_overrides:
myenv = self.environment.overlay()
else:
myenv = self.environment
# Get jinja env overrides from template
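            # e.g. a template starting with '#jinja2: trim_blocks:False, lstrip_blocks:True'
            # sets those attributes on myenv for this render only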
if has_template_overrides:
eol = data.find('\n')
line = data[len(JINJA2_OVERRIDE):eol]
data = data[eol + 1:]
for pair in line.split(','):
if ':' not in pair:
raise AnsibleError("failed to parse jinja2 override '%s'."
" Did you use something different from colon as key-value separator?" % pair.strip())
(key, val) = pair.split(':', 1)
key = key.strip()
setattr(myenv, key, ast.literal_eval(val.strip()))
if escape_backslashes:
# Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\".
data = _escape_backslashes(data, myenv)
try:
t = myenv.from_string(data)
except TemplateSyntaxError as e:
raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data)))
except Exception as e:
if 'recursion' in to_native(e):
raise AnsibleError("recursive loop detected in template string: %s" % to_native(data))
else:
return data
if disable_lookups:
t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup
jvars = AnsibleJ2Vars(self, t.globals)
self.cur_context = new_context = t.new_context(jvars, shared=True)
rf = t.root_render_func(new_context)
try:
if not self.jinja2_native and not convert_data:
res = ansible_concat(rf)
else:
res = self.environment.concat(rf)
unsafe = getattr(new_context, 'unsafe', False)
if unsafe:
res = wrap_var(res)
except TypeError as te:
if 'AnsibleUndefined' in to_native(te):
errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data)
errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te)
raise AnsibleUndefinedVariable(errmsg)
else:
display.debug("failing because of a type error, template data is: %s" % to_text(data))
raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te)))
if isinstance(res, string_types) and preserve_trailing_newlines:
# The low level calls above do not preserve the newline
                # characters at the end of the input data, so we
# calculate the difference in newlines and append them
# to the resulting output for parity
#
# Using Environment's keep_trailing_newline instead would
# result in change in behavior when trailing newlines
# would be kept also for included templates, for example:
# "Hello {% include 'world.txt' %}!" would render as
# "Hello world\n!\n" instead of "Hello world!\n".
res_newlines = _count_newlines_from_end(res)
if data_newlines > res_newlines:
res += self.environment.newline_sequence * (data_newlines - res_newlines)
if unsafe:
res = wrap_var(res)
return res
except (UndefinedError, AnsibleUndefinedVariable) as e:
if fail_on_undefined:
raise AnsibleUndefinedVariable(e)
else:
display.debug("Ignoring undefined failure: %s" % to_text(e))
return data
# for backwards compatibility in case anyone is using old private method directly
_do_template = do_template
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,535 |
jinja parsing of lookup depends on input
|
### Summary
Ansible attempts to jinja-template the return value of a lookup if something in the lookup's arguments is itself a template. But if nothing in the arguments is a template, the lookup's result is not templated again.
Do you have an explanation for the below-described behavior?
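A minimal sketch of the asymmetry using the public `Templar` API (a hypothetical reduction of the playbook repro further below; a direct call may not fail in exactly the same place):
```python
from ansible.parsing.dataloader import DataLoader
from ansible.template import Templar

# lookup_test.yml.txt contains the literal text '{{ foobar }}' somewhere in it
templar = Templar(loader=DataLoader(), variables={'file_name': 'lookup_test.yml.txt'})

# Literal argument: the file content is returned as-is.
templar.template("{{ lookup('file', 'lookup_test.yml.txt') }}")

# Templated argument: the returned content may be templated again,
# so the undefined 'foobar' can raise AnsibleUndefinedVariable.
templar.template("{{ lookup('file', file_name) }}")
```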
### Issue Type
Bug Report
### Component Name
lookup
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.4]
config file = /home/myuser/.ansible.cfg
configured module search path = ['/home/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/devenv-5/lib/python3.9/site-packages/ansible
ansible collection location = /home/myuser/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/devenv-5/bin/ansible
python version = 3.9.12 (main, Mar 25 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_LOG_PATH(/home/myuser/.ansible.cfg) = /home/myuser/ansible.log
DEFAULT_STDOUT_CALLBACK(/home/myuser/.ansible.cfg) = yaml
RETRY_FILES_ENABLED(/home/myuser/.ansible.cfg) = False
```
### OS / Environment
Fedora 35
### Steps to Reproduce
I was able to replicate this with the following playbook:
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: Test some kind of weird lookup thing
hosts: localhost
gather_facts: False
vars:
file_name: "lookup_test.yml.txt" #"{{ my_file }}"
tasks:
- name: Get file content
set_fact:
file_content: "{{ lookup('file', file_name) }}"
- name: Print content
debug:
msg: "{{ content }}"
vars:
content: "{{ file_content }}"
```
Where lookup_test.yml.txt file was:
```
some_content: "---\nusername: \"{{ foobar }}\""
```
And I get the same results you saw where, if I hard code the file name in the playbook, it works fine:
```
$ ansible-playbook lookup_test.yml -e my_file=lookup_test.yml.txt
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test some kind of weird lookup thing] *********************************************************************************************************
TASK [Get file content] *****************************************************************************************************************************
ok: [localhost]
TASK [Print content] ********************************************************************************************************************************
ok: [localhost] => {
"msg": "some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\""
}
PLAY RECAP ******************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
But if I remove the hard coded file name and make it a variable by changing the line:
```
file_name: "lookup_test.yml.txt" #"{{ my_file }}"
```
to
```
file_name: "{{ my_file }}"
```
It now fails:
```
$ ansible-playbook lookup_test.yml -e my_file=lookup_test.yml.txt
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test some kind of weird lookup thing] *********************************************************************************************************
TASK [Get file content] *****************************************************************************************************************************
ok: [localhost]
TASK [Print content] ********************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ file_content }}: some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\": 'foobar' is undefined\n\nThe error appears to be in '/Users/my_user/test/lookup_test.yml': line 12, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Print content\n ^ here\n"}
PLAY RECAP ******************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Expected Results
The file's content should be printed as:
```
"some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\""
```
### Actual Results
```console
FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ file_content }}: some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\": 'foobar' is undefined\n\nThe error appears to be in '/Users/my_user/test/lookup_test.yml': line 12, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Print content\n ^ here\n"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77535
|
https://github.com/ansible/ansible/pull/77609
|
6fdec4a6ab140462e86f2f0618a48243169fb2c9
|
3980eb8c09d170a861351f8aff4a1aa1a8cbb626
| 2022-04-14T09:45:06Z |
python
| 2022-04-26T15:16:22Z |
test/integration/targets/template/unsafe.yml
|
- hosts: localhost
gather_facts: false
vars:
nottemplated: this should not be seen
imunsafe: !unsafe '{{ nottemplated }}'
tasks:
- set_fact:
this_was_unsafe: >
{{ imunsafe }}
- set_fact:
this_always_safe: '{{ imunsafe }}'
- name: ensure nothing was templated
assert:
that:
- this_always_safe == imunsafe
- imunsafe == this_was_unsafe.strip()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,535 |
jinja parsing of lookup depends on input
|
### Summary
Ansible attempts to jinja-template the return value of a lookup if something in the lookup's arguments is itself a template. But if nothing in the arguments is a template, the lookup's result is not templated again.
Do you have an explanation for the below-described behavior?
### Issue Type
Bug Report
### Component Name
lookup
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.4]
config file = /home/myuser/.ansible.cfg
configured module search path = ['/home/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/devenv-5/lib/python3.9/site-packages/ansible
ansible collection location = /home/myuser/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/devenv-5/bin/ansible
python version = 3.9.12 (main, Mar 25 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
DEFAULT_LOG_PATH(/home/myuser/.ansible.cfg) = /home/myuser/ansible.log
DEFAULT_STDOUT_CALLBACK(/home/myuser/.ansible.cfg) = yaml
RETRY_FILES_ENABLED(/home/myuser/.ansible.cfg) = False
```
### OS / Environment
Fedora 35
### Steps to Reproduce
I was able to replicate this with the following playbook:
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: Test some kind of weird lookup thing
hosts: localhost
gather_facts: False
vars:
file_name: "lookup_test.yml.txt" #"{{ my_file }}"
tasks:
- name: Get file content
set_fact:
file_content: "{{ lookup('file', file_name) }}"
- name: Print content
debug:
msg: "{{ content }}"
vars:
content: "{{ file_content }}"
```
Where lookup_test.yml.txt file was:
```
some_content: "---\nusername: \"{{ foobar }}\""
```
And I get the same results you saw where, if I hard code the file name in the playbook, it works fine:
```
$ ansible-playbook lookup_test.yml -e my_file=lookup_test.yml.txt
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test some kind of weird lookup thing] *********************************************************************************************************
TASK [Get file content] *****************************************************************************************************************************
ok: [localhost]
TASK [Print content] ********************************************************************************************************************************
ok: [localhost] => {
"msg": "some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\""
}
PLAY RECAP ******************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
But if I remove the hard coded file name and make it a variable by changing the line:
```
file_name: "lookup_test.yml.txt" #"{{ my_file }}"
```
to
```
file_name: "{{ my_file }}"
```
It now fails:
```
$ ansible-playbook lookup_test.yml -e my_file=lookup_test.yml.txt
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test some kind of weird lookup thing] *********************************************************************************************************
TASK [Get file content] *****************************************************************************************************************************
ok: [localhost]
TASK [Print content] ********************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ file_content }}: some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\": 'foobar' is undefined\n\nThe error appears to be in '/Users/my_user/test/lookup_test.yml': line 12, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Print content\n ^ here\n"}
PLAY RECAP ******************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Expected Results
The file's content should be printed as:
```
"some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\""
```
### Actual Results
```console
FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: {{ file_content }}: some_content: \"---\\nusername: \\\"{{ foobar }}\\\"\": 'foobar' is undefined\n\nThe error appears to be in '/Users/my_user/test/lookup_test.yml': line 12, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Print content\n ^ here\n"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77535
|
https://github.com/ansible/ansible/pull/77609
|
6fdec4a6ab140462e86f2f0618a48243169fb2c9
|
3980eb8c09d170a861351f8aff4a1aa1a8cbb626
| 2022-04-14T09:45:06Z |
python
| 2022-04-26T15:16:22Z |
test/units/template/test_templar.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from jinja2.runtime import Context
from units.compat import unittest
from mock import patch
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleUndefinedVariable
from ansible.module_utils.six import string_types
from ansible.template import Templar, AnsibleContext, AnsibleEnvironment, AnsibleUndefined
from ansible.utils.unsafe_proxy import AnsibleUnsafe, wrap_var
from units.mock.loader import DictDataLoader
class BaseTemplar(object):
def setUp(self):
self.test_vars = dict(
foo="bar",
bam="{{foo}}",
num=1,
var_true=True,
var_false=False,
var_dict=dict(a="b"),
bad_dict="{a='b'",
var_list=[1],
recursive="{{recursive}}",
some_var="blip",
some_static_var="static_blip",
some_keyword="{{ foo }}",
some_unsafe_var=wrap_var("unsafe_blip"),
some_static_unsafe_var=wrap_var("static_unsafe_blip"),
some_unsafe_keyword=wrap_var("{{ foo }}"),
str_with_error="{{ 'str' | from_json }}",
)
self.fake_loader = DictDataLoader({
"/path/to/my_file.txt": "foo\n",
})
self.templar = Templar(loader=self.fake_loader, variables=self.test_vars)
self._ansible_context = AnsibleContext(self.templar.environment, {}, {}, {})
def is_unsafe(self, obj):
return self._ansible_context._is_unsafe(obj)
# class used for testing arbitrary objects passed to template
class SomeClass(object):
foo = 'bar'
def __init__(self):
self.blip = 'blip'
class SomeUnsafeClass(AnsibleUnsafe):
def __init__(self):
super(SomeUnsafeClass, self).__init__()
self.blip = 'unsafe blip'
class TestTemplarTemplate(BaseTemplar, unittest.TestCase):
def test_lookup_jinja_dict_key_in_static_vars(self):
res = self.templar.template("{'some_static_var': '{{ some_var }}'}",
static_vars=['some_static_var'])
# self.assertEqual(res['{{ a_keyword }}'], "blip")
print(res)
def test_is_possibly_template_true(self):
tests = [
'{{ foo }}',
'{% foo %}',
'{# foo #}',
'{# {{ foo }} #}',
'{# {{ nothing }} {# #}',
'{# {{ nothing }} {# #} #}',
'{% raw %}{{ foo }}{% endraw %}',
'{{',
'{%',
'{#',
'{% raw',
]
for test in tests:
self.assertTrue(self.templar.is_possibly_template(test))
def test_is_possibly_template_false(self):
tests = [
'{',
'%',
'#',
'foo',
'}}',
'%}',
'raw %}',
'#}',
]
for test in tests:
self.assertFalse(self.templar.is_possibly_template(test))
def test_is_possible_template(self):
"""This test ensures that a broken template still gets templated"""
# Purposefully invalid jinja
self.assertRaises(AnsibleError, self.templar.template, '{{ foo|default(False)) }}')
def test_is_template_true(self):
tests = [
'{{ foo }}',
'{% foo %}',
'{# foo #}',
'{# {{ foo }} #}',
'{# {{ nothing }} {# #}',
'{# {{ nothing }} {# #} #}',
'{% raw %}{{ foo }}{% endraw %}',
]
for test in tests:
self.assertTrue(self.templar.is_template(test))
def test_is_template_false(self):
tests = [
'foo',
'{{ foo',
'{% foo',
'{# foo',
'{{ foo %}',
'{{ foo #}',
'{% foo }}',
'{% foo #}',
'{# foo %}',
'{# foo }}',
'{{ foo {{',
'{% raw %}{% foo %}',
]
for test in tests:
self.assertFalse(self.templar.is_template(test))
def test_is_template_raw_string(self):
res = self.templar.is_template('foo')
self.assertFalse(res)
def test_is_template_none(self):
res = self.templar.is_template(None)
self.assertFalse(res)
def test_template_convert_bare_string(self):
res = self.templar.template('foo', convert_bare=True)
self.assertEqual(res, 'bar')
def test_template_convert_bare_nested(self):
res = self.templar.template('bam', convert_bare=True)
self.assertEqual(res, 'bar')
def test_template_convert_bare_unsafe(self):
res = self.templar.template('some_unsafe_var', convert_bare=True)
self.assertEqual(res, 'unsafe_blip')
# self.assertIsInstance(res, AnsibleUnsafe)
self.assertTrue(self.is_unsafe(res), 'returned value from template.template (%s) is not marked unsafe' % res)
def test_template_convert_bare_filter(self):
res = self.templar.template('bam|capitalize', convert_bare=True)
self.assertEqual(res, 'Bar')
def test_template_convert_bare_filter_unsafe(self):
res = self.templar.template('some_unsafe_var|capitalize', convert_bare=True)
self.assertEqual(res, 'Unsafe_blip')
# self.assertIsInstance(res, AnsibleUnsafe)
self.assertTrue(self.is_unsafe(res), 'returned value from template.template (%s) is not marked unsafe' % res)
def test_template_convert_data(self):
res = self.templar.template('{{foo}}', convert_data=True)
self.assertTrue(res)
self.assertEqual(res, 'bar')
def test_template_convert_data_template_in_data(self):
res = self.templar.template('{{bam}}', convert_data=True)
self.assertTrue(res)
self.assertEqual(res, 'bar')
def test_template_convert_data_bare(self):
res = self.templar.template('bam', convert_data=True)
self.assertTrue(res)
self.assertEqual(res, 'bam')
def test_template_convert_data_to_json(self):
res = self.templar.template('{{bam|to_json}}', convert_data=True)
self.assertTrue(res)
self.assertEqual(res, '"bar"')
def test_template_convert_data_convert_bare_data_bare(self):
res = self.templar.template('bam', convert_data=True, convert_bare=True)
self.assertTrue(res)
self.assertEqual(res, 'bar')
def test_template_unsafe_non_string(self):
unsafe_obj = AnsibleUnsafe()
res = self.templar.template(unsafe_obj)
self.assertTrue(self.is_unsafe(res), 'returned value from template.template (%s) is not marked unsafe' % res)
def test_template_unsafe_non_string_subclass(self):
unsafe_obj = SomeUnsafeClass()
res = self.templar.template(unsafe_obj)
self.assertTrue(self.is_unsafe(res), 'returned value from template.template (%s) is not marked unsafe' % res)
def test_weird(self):
data = u'''1 2 #}huh{# %}ddfg{% }}dfdfg{{ {%what%} {{#foo#}} {%{bar}%} {#%blip%#} {{asdfsd%} 3 4 {{foo}} 5 6 7'''
self.assertRaisesRegex(AnsibleError,
'template error while templating string',
self.templar.template,
data)
def test_template_with_error(self):
"""Check that AnsibleError is raised, fail if an unhandled exception is raised"""
self.assertRaises(AnsibleError, self.templar.template, "{{ str_with_error }}")
class TestTemplarMisc(BaseTemplar, unittest.TestCase):
def test_templar_simple(self):
templar = self.templar
# test some basic templating
self.assertEqual(templar.template("{{foo}}"), "bar")
self.assertEqual(templar.template("{{foo}}\n"), "bar\n")
self.assertEqual(templar.template("{{foo}}\n", preserve_trailing_newlines=True), "bar\n")
self.assertEqual(templar.template("{{foo}}\n", preserve_trailing_newlines=False), "bar")
self.assertEqual(templar.template("{{bam}}"), "bar")
self.assertEqual(templar.template("{{num}}"), 1)
self.assertEqual(templar.template("{{var_true}}"), True)
self.assertEqual(templar.template("{{var_false}}"), False)
self.assertEqual(templar.template("{{var_dict}}"), dict(a="b"))
self.assertEqual(templar.template("{{bad_dict}}"), "{a='b'")
self.assertEqual(templar.template("{{var_list}}"), [1])
self.assertEqual(templar.template(1, convert_bare=True), 1)
# force errors
self.assertRaises(AnsibleUndefinedVariable, templar.template, "{{bad_var}}")
self.assertRaises(AnsibleUndefinedVariable, templar.template, "{{lookup('file', bad_var)}}")
self.assertRaises(AnsibleError, templar.template, "{{lookup('bad_lookup')}}")
self.assertRaises(AnsibleError, templar.template, "{{recursive}}")
self.assertRaises(AnsibleUndefinedVariable, templar.template, "{{foo-bar}}")
# test with fail_on_undefined=False
self.assertEqual(templar.template("{{bad_var}}", fail_on_undefined=False), "{{bad_var}}")
# test setting available_variables
templar.available_variables = dict(foo="bam")
self.assertEqual(templar.template("{{foo}}"), "bam")
# variables must be a dict() for available_variables setter
# FIXME Use assertRaises() as a context manager (added in 2.7) once we do not run tests on Python 2.6 anymore.
try:
templar.available_variables = "foo=bam"
except AssertionError:
pass
except Exception as e:
self.fail(e)
def test_templar_escape_backslashes(self):
# Rule of thumb: If escape backslashes is True you should end up with
# the same number of backslashes as when you started.
self.assertEqual(self.templar.template("\t{{foo}}", escape_backslashes=True), "\tbar")
self.assertEqual(self.templar.template("\t{{foo}}", escape_backslashes=False), "\tbar")
self.assertEqual(self.templar.template("\\{{foo}}", escape_backslashes=True), "\\bar")
self.assertEqual(self.templar.template("\\{{foo}}", escape_backslashes=False), "\\bar")
self.assertEqual(self.templar.template("\\{{foo + '\t' }}", escape_backslashes=True), "\\bar\t")
self.assertEqual(self.templar.template("\\{{foo + '\t' }}", escape_backslashes=False), "\\bar\t")
self.assertEqual(self.templar.template("\\{{foo + '\\t' }}", escape_backslashes=True), "\\bar\\t")
self.assertEqual(self.templar.template("\\{{foo + '\\t' }}", escape_backslashes=False), "\\bar\t")
self.assertEqual(self.templar.template("\\{{foo + '\\\\t' }}", escape_backslashes=True), "\\bar\\\\t")
self.assertEqual(self.templar.template("\\{{foo + '\\\\t' }}", escape_backslashes=False), "\\bar\\t")
def test_template_jinja2_extensions(self):
fake_loader = DictDataLoader({})
templar = Templar(loader=fake_loader)
old_exts = C.DEFAULT_JINJA2_EXTENSIONS
try:
C.DEFAULT_JINJA2_EXTENSIONS = "foo,bar"
self.assertEqual(templar._get_extensions(), ['foo', 'bar'])
finally:
C.DEFAULT_JINJA2_EXTENSIONS = old_exts
class TestTemplarLookup(BaseTemplar, unittest.TestCase):
def test_lookup_missing_plugin(self):
self.assertRaisesRegex(AnsibleError,
r'lookup plugin \(not_a_real_lookup_plugin\) not found',
self.templar._lookup,
'not_a_real_lookup_plugin',
'an_arg', a_keyword_arg='a_keyword_arg_value')
def test_lookup_list(self):
res = self.templar._lookup('list', 'an_arg', 'another_arg')
self.assertEqual(res, 'an_arg,another_arg')
def test_lookup_jinja_undefined(self):
self.assertRaisesRegex(AnsibleUndefinedVariable,
"'an_undefined_jinja_var' is undefined",
self.templar._lookup,
'list', '{{ an_undefined_jinja_var }}')
def test_lookup_jinja_defined(self):
res = self.templar._lookup('list', '{{ some_var }}')
self.assertTrue(self.is_unsafe(res))
# self.assertIsInstance(res, AnsibleUnsafe)
def test_lookup_jinja_dict_string_passed(self):
self.assertRaisesRegex(AnsibleError,
"with_dict expects a dict",
self.templar._lookup,
'dict',
'{{ some_var }}')
def test_lookup_jinja_dict_list_passed(self):
self.assertRaisesRegex(AnsibleError,
"with_dict expects a dict",
self.templar._lookup,
'dict',
['foo', 'bar'])
def test_lookup_jinja_kwargs(self):
res = self.templar._lookup('list', 'blip', random_keyword='12345')
self.assertTrue(self.is_unsafe(res))
# self.assertIsInstance(res, AnsibleUnsafe)
def test_lookup_jinja_list_wantlist(self):
res = self.templar._lookup('list', '{{ some_var }}', wantlist=True)
self.assertEqual(res, ["blip"])
def test_lookup_jinja_list_wantlist_undefined(self):
self.assertRaisesRegex(AnsibleUndefinedVariable,
"'some_undefined_var' is undefined",
self.templar._lookup,
'list',
'{{ some_undefined_var }}',
wantlist=True)
def test_lookup_jinja_list_wantlist_unsafe(self):
res = self.templar._lookup('list', '{{ some_unsafe_var }}', wantlist=True)
for lookup_result in res:
self.assertTrue(self.is_unsafe(lookup_result))
# self.assertIsInstance(lookup_result, AnsibleUnsafe)
# Should this be an AnsibleUnsafe
# self.assertIsInstance(res, AnsibleUnsafe)
def test_lookup_jinja_dict(self):
res = self.templar._lookup('list', {'{{ a_keyword }}': '{{ some_var }}'})
self.assertEqual(res['{{ a_keyword }}'], "blip")
# TODO: Should this be an AnsibleUnsafe
# self.assertIsInstance(res['{{ a_keyword }}'], AnsibleUnsafe)
# self.assertIsInstance(res, AnsibleUnsafe)
def test_lookup_jinja_dict_unsafe(self):
res = self.templar._lookup('list', {'{{ some_unsafe_key }}': '{{ some_unsafe_var }}'})
self.assertTrue(self.is_unsafe(res['{{ some_unsafe_key }}']))
# self.assertIsInstance(res['{{ some_unsafe_key }}'], AnsibleUnsafe)
# TODO: Should this be an AnsibleUnsafe
# self.assertIsInstance(res, AnsibleUnsafe)
def test_lookup_jinja_dict_unsafe_value(self):
res = self.templar._lookup('list', {'{{ a_keyword }}': '{{ some_unsafe_var }}'})
self.assertTrue(self.is_unsafe(res['{{ a_keyword }}']))
# self.assertIsInstance(res['{{ a_keyword }}'], AnsibleUnsafe)
# TODO: Should this be an AnsibleUnsafe
# self.assertIsInstance(res, AnsibleUnsafe)
def test_lookup_jinja_none(self):
res = self.templar._lookup('list', None)
self.assertIsNone(res)
class TestAnsibleContext(BaseTemplar, unittest.TestCase):
def _context(self, variables=None):
variables = variables or {}
env = AnsibleEnvironment()
context = AnsibleContext(env, parent={}, name='some_context',
blocks={})
for key, value in variables.items():
context.vars[key] = value
return context
def test(self):
context = self._context()
self.assertIsInstance(context, AnsibleContext)
self.assertIsInstance(context, Context)
def test_resolve_unsafe(self):
context = self._context(variables={'some_unsafe_key': wrap_var('some_unsafe_string')})
res = context.resolve('some_unsafe_key')
# self.assertIsInstance(res, AnsibleUnsafe)
self.assertTrue(self.is_unsafe(res),
'return of AnsibleContext.resolve (%s) was expected to be marked unsafe but was not' % res)
def test_resolve_unsafe_list(self):
context = self._context(variables={'some_unsafe_key': [wrap_var('some unsafe string 1')]})
res = context.resolve('some_unsafe_key')
# self.assertIsInstance(res[0], AnsibleUnsafe)
self.assertTrue(self.is_unsafe(res),
'return of AnsibleContext.resolve (%s) was expected to be marked unsafe but was not' % res)
def test_resolve_unsafe_dict(self):
context = self._context(variables={'some_unsafe_key':
{'an_unsafe_dict': wrap_var('some unsafe string 1')}
})
res = context.resolve('some_unsafe_key')
self.assertTrue(self.is_unsafe(res['an_unsafe_dict']),
'return of AnsibleContext.resolve (%s) was expected to be marked unsafe but was not' % res['an_unsafe_dict'])
def test_resolve(self):
context = self._context(variables={'some_key': 'some_string'})
res = context.resolve('some_key')
self.assertEqual(res, 'some_string')
# self.assertNotIsInstance(res, AnsibleUnsafe)
self.assertFalse(self.is_unsafe(res),
'return of AnsibleContext.resolve (%s) was not expected to be marked unsafe but was' % res)
def test_resolve_none(self):
context = self._context(variables={'some_key': None})
res = context.resolve('some_key')
self.assertEqual(res, None)
# self.assertNotIsInstance(res, AnsibleUnsafe)
self.assertFalse(self.is_unsafe(res),
'return of AnsibleContext.resolve (%s) was not expected to be marked unsafe but was' % res)
def test_is_unsafe(self):
context = self._context()
self.assertFalse(context._is_unsafe(AnsibleUndefined()))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,686 |
In 2.12 top-level include stops working, but removal is scheduled for 2.16
|
### Summary
I have a top-level include like so:
```
- include: foo.yml
```
This worked fine in 2.11, but after updating to 2.12.1, I get:
~~~
ERROR! 'include' is not a valid attribute for a Play
~~~
I know `include` is deprecated and going away, but per the doc this shouldn't happen until 2.16.
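For reference, the usual migration for a top-level include is `import_playbook` (a sketch, assuming `foo.yml` holds a list of plays):
```yaml
- import_playbook: foo.yml
```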
### Issue Type
Bug Report
### Component Name
ansible-playbook
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.1]
config file = /home/tdowns/.ansible.cfg
configured module search path = ['/home/tdowns/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/tdowns/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.7 (default, Sep 10 2021, 14:59:43) [GCC 11.2.0]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
(no output)
```
### OS / Environment
Ubuntu 21.10
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- include: foo.yml
```
### Expected Results
No error, perhaps a deprecation warning depending on your deprecation warnings setting.
### Actual Results
```console
ansible-playbook bar.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
ERROR! 'include' is not a valid attribute for a Play
The error appears to be in '.../bar.yml': line 1, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- include: foo.yml
^ here
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76686
|
https://github.com/ansible/ansible/pull/77597
|
709ae7c050c8ad5aa0565b8d023303407f7ca9b6
|
d321fa3f15cefa853d646985b2a201233cdb47c8
| 2022-01-09T06:42:01Z |
python
| 2022-04-26T18:23:24Z |
lib/ansible/modules/_include.py
|
# -*- coding: utf-8 -*-
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
author: Ansible Core Team (@ansible)
module: include
short_description: Include a play or task list
description:
- Includes a file with a list of plays or tasks to be executed in the current playbook.
- Files with a list of plays can only be included at the top level. Lists of tasks can only be included where tasks
normally run (in play).
- Before Ansible 2.0, all includes were 'static' and were executed when the play was compiled.
- Static includes are not subject to most directives. For example, loops or conditionals are applied instead to each
inherited task.
- Since Ansible 2.0, task includes are dynamic and behave more like real tasks. This means they can be looped,
skipped and use variables from any source. Ansible tries to auto detect this, but you can use the C(static)
directive (which was added in Ansible 2.1) to bypass autodetection.
- This module is also supported for Windows targets.
version_added: "0.6"
deprecated:
why: it has too many conflicting behaviours depending on keyword combinations and it was unclear how it should behave in each case.
    New actions were developed that were specific about each case and related behaviours.
alternative: include_tasks, import_tasks, import_playbook
removed_in: "2.16"
removed_from_collection: 'ansible.builtin'
options:
free-form:
description:
- This module allows you to specify the name of the file directly without any other options.
notes:
- This is a core feature of Ansible, rather than a module, and cannot be overridden like a module.
  - Include has some unintuitive behaviours depending on whether it is running statically or dynamically, and whether it is in a play or a playbook context;
    in an effort to clarify behaviours we are moving to a new set of modules (M(ansible.builtin.include_tasks),
M(ansible.builtin.include_role), M(ansible.builtin.import_playbook), M(ansible.builtin.import_tasks))
that have well established and clear behaviours.
- B(This module will still be supported for some time but we are looking at deprecating it in the near future.)
seealso:
- module: ansible.builtin.import_playbook
- module: ansible.builtin.import_role
- module: ansible.builtin.import_tasks
- module: ansible.builtin.include_role
- module: ansible.builtin.include_tasks
- ref: playbooks_reuse_includes
description: More information related to including and importing playbooks, roles and tasks.
'''
EXAMPLES = r'''
- hosts: localhost
tasks:
- ansible.builtin.debug:
msg: play1
- name: Include a play after another play
ansible.builtin.include: otherplays.yaml
- hosts: all
tasks:
- ansible.builtin.debug:
msg: task1
- name: Include task list in play
ansible.builtin.include: stuff.yaml
- ansible.builtin.debug:
msg: task10
- hosts: all
tasks:
- ansible.builtin.debug:
msg: task1
- name: Include task list in play only if the condition is true
ansible.builtin.include: "{{ hostvar }}.yaml"
static: no
when: hostvar is defined
'''
RETURN = r'''
# This module does not return anything except plays or tasks to execute.
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,660 |
Implement pre-flight checks for blocking stdin, stdout, stderr
|
### Summary
Add preflight checks to `lib/ansible/cli/__init__.py` to ensure that `stdin`, `stdout`, and `stderr` are blocking rather than non-blocking. The `Display` class cannot handle the non-blocking case, and it has been decided not to attempt workarounds to allow this.
We've experienced this happening in subprocesses where something like `ssh` sets these FDs as non-blocking and doesn't restore them to blocking, causing tracebacks.
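A minimal sketch of such a pre-flight check, assuming `os.get_blocking()` is available (Unix); the names below are illustrative, not the final implementation:
```python
import os
import sys

# Hypothetical pre-flight check: bail out early if any standard stream
# has been left in non-blocking mode, since Display cannot cope with that.
for handle in (sys.stdin, sys.stdout, sys.stderr):
    try:
        fd = handle.fileno()
    except (AttributeError, ValueError):
        continue  # stream was replaced or closed; nothing to check
    if not os.get_blocking(fd):
        raise SystemExit('ERROR: %s is non-blocking, which is not supported' % handle.name)
```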
### Issue Type
Feature Idea
### Component Name
```
lib/ansible/cli/__init__.py
```
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77660
|
https://github.com/ansible/ansible/pull/77668
|
cc872a57f2e04924a0fc31bf4fed6400d189e265
|
f7c2b1986c5b6afce1d8fe83ce6bf26b535aa617
| 2022-04-27T15:22:47Z |
python
| 2022-04-27T23:28:06Z |
changelogs/fragments/ansible-require-blocking-io.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,660 |
Implement pre-flight checks for blocking stdin, stdout, stderr
|
### Summary
Add preflight checks to `lib/ansible/cli/__init__.py` to ensure that `stdin`, `stdout`, and `stderr` are blocking rather than non-blocking. The `Display` class cannot handle the non-blocking case, and it has been decided not to attempt workarounds to allow this.
We've experienced this happening in subprocesses where something like `ssh` sets these FDs as non-blocking and doesn't restore them to blocking, causing tracebacks.
### Issue Type
Feature Idea
### Component Name
```
lib/ansible/cli/__init__.py
```
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77660
|
https://github.com/ansible/ansible/pull/77668
|
cc872a57f2e04924a0fc31bf4fed6400d189e265
|
f7c2b1986c5b6afce1d8fe83ce6bf26b535aa617
| 2022-04-27T15:22:47Z |
python
| 2022-04-27T23:28:06Z |
lib/ansible/cli/__init__.py
|
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2016, Toshio Kuratomi <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import sys
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
if sys.version_info < (3, 8):
raise SystemExit(
'ERROR: Ansible requires Python 3.8 or newer on the controller. '
'Current version: %s' % ''.join(sys.version.splitlines())
)
from importlib.metadata import version
from ansible.module_utils.compat.version import LooseVersion
# Used for determining if the system is running a new enough Jinja2 version
# and should only restrict on our documented minimum versions
jinja2_version = version('jinja2')
if jinja2_version < LooseVersion('3.0'):
raise SystemExit(
'ERROR: Ansible requires Jinja2 3.0 or newer on the controller. '
'Current version: %s' % jinja2_version
)
import errno
import getpass
import os
import subprocess
import traceback
from abc import ABC, abstractmethod
from pathlib import Path
try:
from ansible import constants as C
from ansible.utils.display import Display, initialize_locale
initialize_locale()
display = Display()
except Exception as e:
print('ERROR: %s' % e, file=sys.stderr)
sys.exit(5)
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
from ansible.inventory.manager import InventoryManager
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.common.file import is_executable
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.vault import PromptVaultSecret, get_file_vault_secret
from ansible.plugins.loader import add_all_plugin_dirs
from ansible.release import __version__
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path
from ansible.utils.path import unfrackpath
from ansible.utils.unsafe_proxy import to_unsafe_text
from ansible.vars.manager import VariableManager
try:
import argcomplete
HAS_ARGCOMPLETE = True
except ImportError:
HAS_ARGCOMPLETE = False
class CLI(ABC):
''' code behind bin/ansible* programs '''
PAGER = 'less'
# -F (quit-if-one-screen) -R (allow raw ansi control chars)
# -S (chop long lines) -X (disable termcap init and de-init)
LESS_OPTS = 'FRSX'
SKIP_INVENTORY_DEFAULTS = False
def __init__(self, args, callback=None):
"""
Base init method for all command line programs
"""
if not args:
raise ValueError('A non-empty list for args is required')
self.args = args
self.parser = None
self.callback = callback
if C.DEVEL_WARNING and __version__.endswith('dev0'):
display.warning(
'You are running the development version of Ansible. You should only run Ansible from "devel" if '
'you are modifying the Ansible engine, or trying out features under development. This is a rapidly '
'changing source of code and can become unstable at any point.'
)
@abstractmethod
def run(self):
"""Run the ansible command
Subclasses must implement this method. It does the actual work of
running an Ansible command.
"""
self.parse()
display.vv(to_text(opt_help.version(self.parser.prog)))
if C.CONFIG_FILE:
display.v(u"Using %s as config file" % to_text(C.CONFIG_FILE))
else:
display.v(u"No config file found; using defaults")
# warn about deprecated config options
for deprecated in C.config.DEPRECATED:
name = deprecated[0]
why = deprecated[1]['why']
if 'alternatives' in deprecated[1]:
alt = ', use %s instead' % deprecated[1]['alternatives']
else:
alt = ''
ver = deprecated[1].get('version')
date = deprecated[1].get('date')
collection_name = deprecated[1].get('collection_name')
display.deprecated("%s option, %s%s" % (name, why, alt),
version=ver, date=date, collection_name=collection_name)
@staticmethod
def split_vault_id(vault_id):
# return (before_@, after_@)
# if no @, return whole string as after_
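        # e.g. 'dev@~/.vault_pass' -> ('dev', '~/.vault_pass'); 'myvaultpass' -> (None, 'myvaultpass')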
if '@' not in vault_id:
return (None, vault_id)
parts = vault_id.split('@', 1)
ret = tuple(parts)
return ret
@staticmethod
def build_vault_ids(vault_ids, vault_password_files=None,
ask_vault_pass=None, create_new_password=None,
auto_prompt=True):
vault_password_files = vault_password_files or []
vault_ids = vault_ids or []
# convert vault_password_files into vault_ids slugs
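        # e.g. '/path/to/pw_file' becomes 'default@/path/to/pw_file' (using the default vault identity)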
for password_file in vault_password_files:
id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, password_file)
# note this makes --vault-id higher precedence than --vault-password-file
# if we want to intertwingle them in order probably need a cli callback to populate vault_ids
# used by --vault-id and --vault-password-file
vault_ids.append(id_slug)
        # if an action needs an encrypt password (create_new_password=True) and we don't
        # have other secrets set up, then automatically add a password prompt as well.
        # prompts can't/shouldn't work without a tty, so don't add prompt secrets
if ask_vault_pass or (not vault_ids and auto_prompt):
id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, u'prompt_ask_vault_pass')
vault_ids.append(id_slug)
return vault_ids
# TODO: remove the now unused args
@staticmethod
def setup_vault_secrets(loader, vault_ids, vault_password_files=None,
ask_vault_pass=None, create_new_password=False,
auto_prompt=True):
# list of tuples
vault_secrets = []
# Depending on the vault_id value (including how --ask-vault-pass / --vault-password-file create a vault_id)
# we need to show different prompts. This is for compat with older Towers that expect a
        # certain vault password prompt format, so the 'prompt_ask_vault_pass' vault_id gets the old format.
prompt_formats = {}
# If there are configured default vault identities, they are considered 'first'
# so we prepend them to vault_ids (from cli) here
vault_password_files = vault_password_files or []
if C.DEFAULT_VAULT_PASSWORD_FILE:
vault_password_files.append(C.DEFAULT_VAULT_PASSWORD_FILE)
if create_new_password:
prompt_formats['prompt'] = ['New vault password (%(vault_id)s): ',
'Confirm new vault password (%(vault_id)s): ']
# 2.3 format prompts for --ask-vault-pass
prompt_formats['prompt_ask_vault_pass'] = ['New Vault password: ',
'Confirm New Vault password: ']
else:
prompt_formats['prompt'] = ['Vault password (%(vault_id)s): ']
# The format when we use just --ask-vault-pass needs to match 'Vault password:\s*?$'
prompt_formats['prompt_ask_vault_pass'] = ['Vault password: ']
vault_ids = CLI.build_vault_ids(vault_ids,
vault_password_files,
ask_vault_pass,
create_new_password,
auto_prompt=auto_prompt)
last_exception = found_vault_secret = None
for vault_id_slug in vault_ids:
vault_id_name, vault_id_value = CLI.split_vault_id(vault_id_slug)
if vault_id_value in ['prompt', 'prompt_ask_vault_pass']:
# --vault-id some_name@prompt_ask_vault_pass --vault-id other_name@prompt_ask_vault_pass will be a little
# confusing since it will use the old format without the vault id in the prompt
built_vault_id = vault_id_name or C.DEFAULT_VAULT_IDENTITY
# choose the prompt based on --vault-id=prompt or --ask-vault-pass. --ask-vault-pass
# always gets the old format for Tower compatibility.
# ie, we used --ask-vault-pass, so we need to use the old vault password prompt
# format since Tower needs to match on that format.
prompted_vault_secret = PromptVaultSecret(prompt_formats=prompt_formats[vault_id_value],
vault_id=built_vault_id)
                # an empty or invalid password from the prompt will warn and continue to the next
# without erroring globally
try:
prompted_vault_secret.load()
except AnsibleError as exc:
display.warning('Error in vault password prompt (%s): %s' % (vault_id_name, exc))
raise
found_vault_secret = True
vault_secrets.append((built_vault_id, prompted_vault_secret))
# update loader with new secrets incrementally, so we can load a vault password
# that is encrypted with a vault secret provided earlier
loader.set_vault_secrets(vault_secrets)
continue
# assuming anything else is a password file
display.vvvvv('Reading vault password file: %s' % vault_id_value)
# read vault_pass from a file
try:
file_vault_secret = get_file_vault_secret(filename=vault_id_value,
vault_id=vault_id_name,
loader=loader)
except AnsibleError as exc:
display.warning('Error getting vault password file (%s): %s' % (vault_id_name, to_text(exc)))
last_exception = exc
continue
try:
file_vault_secret.load()
except AnsibleError as exc:
display.warning('Error in vault password file loading (%s): %s' % (vault_id_name, to_text(exc)))
last_exception = exc
continue
found_vault_secret = True
if vault_id_name:
vault_secrets.append((vault_id_name, file_vault_secret))
else:
vault_secrets.append((C.DEFAULT_VAULT_IDENTITY, file_vault_secret))
# update loader with as-yet-known vault secrets
loader.set_vault_secrets(vault_secrets)
# An invalid or missing password file will error globally
# if no valid vault secret was found.
if last_exception and not found_vault_secret:
raise last_exception
return vault_secrets
@staticmethod
def _get_secret(prompt):
secret = getpass.getpass(prompt=prompt)
if secret:
secret = to_unsafe_text(secret)
return secret
@staticmethod
def ask_passwords():
''' prompt for connection and become passwords if needed '''
op = context.CLIARGS
sshpass = None
becomepass = None
become_prompt = ''
become_prompt_method = "BECOME" if C.AGNOSTIC_BECOME_PROMPT else op['become_method'].upper()
try:
become_prompt = "%s password: " % become_prompt_method
if op['ask_pass']:
sshpass = CLI._get_secret("SSH password: ")
become_prompt = "%s password[defaults to SSH password]: " % become_prompt_method
elif op['connection_password_file']:
sshpass = CLI.get_password_from_file(op['connection_password_file'])
if op['become_ask_pass']:
becomepass = CLI._get_secret(become_prompt)
if op['ask_pass'] and becomepass == '':
becomepass = sshpass
elif op['become_password_file']:
becomepass = CLI.get_password_from_file(op['become_password_file'])
except EOFError:
pass
return (sshpass, becomepass)
def validate_conflicts(self, op, runas_opts=False, fork_opts=False):
''' check for conflicting options '''
if fork_opts:
if op.forks < 1:
self.parser.error("The number of processes (--forks) must be >= 1")
return op
@abstractmethod
def init_parser(self, usage="", desc=None, epilog=None):
"""
Create an options parser for most ansible scripts
Subclasses need to implement this method. They will usually call the base class's
init_parser to create a basic version and then add their own options on top of that.
An implementation will look something like this::
def init_parser(self):
super(MyCLI, self).init_parser(usage="My Ansible CLI", inventory_opts=True)
ansible.arguments.option_helpers.add_runas_options(self.parser)
self.parser.add_option('--my-option', dest='my_option', action='store')
"""
self.parser = opt_help.create_base_parser(self.name, usage=usage, desc=desc, epilog=epilog)
@abstractmethod
def post_process_args(self, options):
"""Process the command line args
Subclasses need to implement this method. This method validates and transforms the command
line arguments. It can be used to check whether conflicting values were given, whether filenames
exist, etc.
An implementation will look something like this::
def post_process_args(self, options):
options = super(MyCLI, self).post_process_args(options)
if options.addition and options.subtraction:
raise AnsibleOptionsError('Only one of --addition and --subtraction can be specified')
if isinstance(options.listofhosts, string_types):
options.listofhosts = string_types.split(',')
return options
"""
# process tags
if hasattr(options, 'tags') and not options.tags:
# optparse defaults does not do what's expected
# More specifically, we want `--tags` to be additive. So we cannot
# simply change C.TAGS_RUN's default to ["all"] because then passing
# --tags foo would cause us to have ['all', 'foo']
options.tags = ['all']
if hasattr(options, 'tags') and options.tags:
tags = set()
for tag_set in options.tags:
for tag in tag_set.split(u','):
tags.add(tag.strip())
options.tags = list(tags)
# process skip_tags
if hasattr(options, 'skip_tags') and options.skip_tags:
skip_tags = set()
for tag_set in options.skip_tags:
for tag in tag_set.split(u','):
skip_tags.add(tag.strip())
options.skip_tags = list(skip_tags)
# process inventory options except for CLIs that require their own processing
if hasattr(options, 'inventory') and not self.SKIP_INVENTORY_DEFAULTS:
if options.inventory:
# should always be list
if isinstance(options.inventory, string_types):
options.inventory = [options.inventory]
# Ensure full paths when needed
options.inventory = [unfrackpath(opt, follow=False) if ',' not in opt else opt for opt in options.inventory]
else:
options.inventory = C.DEFAULT_HOST_LIST
return options
def parse(self):
"""Parse the command line args
This method parses the command line arguments. It uses the parser
stored in the self.parser attribute and saves the args and options in
context.CLIARGS.
Subclasses need to implement two helper methods, init_parser() and post_process_args() which
are called from this function before and after parsing the arguments.
"""
self.init_parser()
if HAS_ARGCOMPLETE:
argcomplete.autocomplete(self.parser)
try:
options = self.parser.parse_args(self.args[1:])
except SystemExit as e:
if e.code != 0:
self.parser.exit(status=2, message=" \n%s" % self.parser.format_help())
raise
options = self.post_process_args(options)
context._init_global_context(options)
@staticmethod
def version_info(gitinfo=False):
''' return full ansible version info '''
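# Illustrative result shape (values depend on the installed version):
#   CLI.version_info() -> {'string': '2.12.3', 'full': '2.12.3',
#                          'major': 2, 'minor': 12, 'revision': 3}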
if gitinfo:
# expensive call, use with care
ansible_version_string = opt_help.version()
else:
ansible_version_string = __version__
ansible_version = ansible_version_string.split()[0]
ansible_versions = ansible_version.split('.')
for counter in range(len(ansible_versions)):
if ansible_versions[counter] == "":
ansible_versions[counter] = 0
try:
ansible_versions[counter] = int(ansible_versions[counter])
except Exception:
pass
if len(ansible_versions) < 3:
for counter in range(len(ansible_versions), 3):
ansible_versions.append(0)
return {'string': ansible_version_string.strip(),
'full': ansible_version,
'major': ansible_versions[0],
'minor': ansible_versions[1],
'revision': ansible_versions[2]}
@staticmethod
def pager(text):
''' find reasonable way to display text '''
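# Selection order: plain output when stdout is not a tty; $PAGER if set
# (except on win32, which falls back to plain output); otherwise 'less'
# when available, and plain output as the last resort.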
# this is a much simpler form of what is in pydoc.py
if not sys.stdout.isatty():
display.display(text, screen_only=True)
elif 'PAGER' in os.environ:
if sys.platform == 'win32':
display.display(text, screen_only=True)
else:
CLI.pager_pipe(text, os.environ['PAGER'])
else:
p = subprocess.Popen('less --version', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
if p.returncode == 0:
CLI.pager_pipe(text, 'less')
else:
display.display(text, screen_only=True)
@staticmethod
def pager_pipe(text, cmd):
''' pipe text through a pager '''
if 'LESS' not in os.environ:
os.environ['LESS'] = CLI.LESS_OPTS
try:
cmd = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=sys.stdout)
cmd.communicate(input=to_bytes(text))
except IOError:
pass
except KeyboardInterrupt:
pass
@staticmethod
def _play_prereqs():
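# Builds the trio shared by the play-oriented CLIs: a DataLoader for file
# and vault access, an InventoryManager restricted to the requested
# sources, and a VariableManager tying the two together.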
options = context.CLIARGS
# all needs loader
loader = DataLoader()
basedir = options.get('basedir', False)
if basedir:
loader.set_basedir(basedir)
add_all_plugin_dirs(basedir)
AnsibleCollectionConfig.playbook_paths = basedir
default_collection = _get_collection_name_from_path(basedir)
if default_collection:
display.warning(u'running with default collection {0}'.format(default_collection))
AnsibleCollectionConfig.default_collection = default_collection
vault_ids = list(options['vault_ids'])
default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST
vault_ids = default_vault_ids + vault_ids
vault_secrets = CLI.setup_vault_secrets(loader,
vault_ids=vault_ids,
vault_password_files=list(options['vault_password_files']),
ask_vault_pass=options['ask_vault_pass'],
auto_prompt=False)
loader.set_vault_secrets(vault_secrets)
# create the inventory, and filter it based on the subset specified (if any)
inventory = InventoryManager(loader=loader, sources=options['inventory'], cache=(not options.get('flush_cache')))
# create the variable manager, which will be shared throughout
# the code, ensuring a consistent view of global variables
variable_manager = VariableManager(loader=loader, inventory=inventory, version_info=CLI.version_info(gitinfo=False))
return loader, inventory, variable_manager
@staticmethod
def get_host_list(inventory, subset, pattern='all'):
no_hosts = False
if len(inventory.list_hosts()) == 0:
# Empty inventory
if C.LOCALHOST_WARNING and pattern not in C.LOCALHOST:
display.warning("provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'")
no_hosts = True
inventory.subset(subset)
hosts = inventory.list_hosts(pattern)
if not hosts and no_hosts is False:
raise AnsibleError("Specified inventory, host pattern and/or --limit leaves us with no hosts to target.")
return hosts
@staticmethod
def get_password_from_file(pwd_file):
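# Three accepted sources: '-' reads the secret from stdin, an executable
# file is run and its stdout used, and any other existing file is read
# directly; trailing newlines are stripped in all cases.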
b_pwd_file = to_bytes(pwd_file)
secret = None
if b_pwd_file == b'-':
# ensure it's read as bytes
secret = sys.stdin.buffer.read()
elif not os.path.exists(b_pwd_file):
raise AnsibleError("The password file %s was not found" % pwd_file)
elif is_executable(b_pwd_file):
display.vvvv(u'The password file %s is a script.' % to_text(pwd_file))
cmd = [b_pwd_file]
try:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as e:
raise AnsibleError("Problem occured when trying to run the password script %s (%s)."
" If this is not a script, remove the executable bit from the file." % (pwd_file, e))
stdout, stderr = p.communicate()
if p.returncode != 0:
raise AnsibleError("The password script %s returned an error (rc=%s): %s" % (pwd_file, p.returncode, stderr))
secret = stdout
else:
try:
with open(b_pwd_file, "rb") as f:
secret = f.read().strip()
except (OSError, IOError) as e:
raise AnsibleError("Could not read password file %s: %s" % (pwd_file, e))
secret = secret.strip(b'\r\n')
if not secret:
raise AnsibleError('Empty password was provided from file (%s)' % pwd_file)
return to_unsafe_text(secret)
@classmethod
def cli_executor(cls, args=None):
if args is None:
args = sys.argv
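# Exit codes produced below: 1 generic AnsibleError, 4 parser error,
# 5 options error, 6 non-utf-8 command line, 99 keyboard interrupt,
# 250 unexpected exception; 2/3 are reserved for host failures handled
# by the TaskQueueManager.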
try:
display.debug("starting run")
ansible_dir = Path(C.ANSIBLE_HOME).expanduser()
try:
ansible_dir.mkdir(mode=0o700)
except OSError as exc:
if exc.errno != errno.EEXIST:
display.warning(
"Failed to create the directory '%s': %s" % (ansible_dir, to_text(exc, errors='surrogate_or_replace'))
)
else:
display.debug("Created the '%s' directory" % ansible_dir)
try:
args = [to_text(a, errors='surrogate_or_strict') for a in args]
except UnicodeError:
display.error('Command line args are not in utf-8, unable to continue. Ansible currently only understands utf-8')
display.display(u"The full traceback was:\n\n%s" % to_text(traceback.format_exc()))
exit_code = 6
else:
cli = cls(args)
exit_code = cli.run()
except AnsibleOptionsError as e:
cli.parser.print_help()
display.error(to_text(e), wrap_text=False)
exit_code = 5
except AnsibleParserError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 4
# TQM takes care of these, but leaving comment to reserve the exit codes
# except AnsibleHostUnreachable as e:
# display.error(str(e))
# exit_code = 3
# except AnsibleHostFailed as e:
# display.error(str(e))
# exit_code = 2
except AnsibleError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 1
except KeyboardInterrupt:
display.error("User interrupted execution")
exit_code = 99
except Exception as e:
if C.DEFAULT_DEBUG:
# Show raw stacktraces in debug mode; this also allows pdb to
# enter post-mortem mode.
raise
have_cli_options = bool(context.CLIARGS)
display.error("Unexpected Exception, this is probably a bug: %s" % to_text(e), wrap_text=False)
if not have_cli_options or (have_cli_options and context.CLIARGS['verbosity'] > 2):
log_only = False
if hasattr(e, 'orig_exc'):
display.vvv('\nexception type: %s' % to_text(type(e.orig_exc)))
why = to_text(e.orig_exc)
if to_text(e) != why:
display.vvv('\noriginal msg: %s' % why)
else:
display.display("to see the full traceback, use -vvv")
log_only = True
display.display(u"the full traceback was:\n\n%s" % to_text(traceback.format_exc()), log_only=log_only)
exit_code = 250
sys.exit(exit_code)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,799 |
clear documentation of versioning of stable versions
|
### Summary
Needing a new ansible feature I wanted to see if a newer ansible Debian package is available somewhere.
I went and checked [available Debian packages](https://packages.debian.org/search?keywords=ansible). Ouuugh, there is v4.6.0 available in Debian experimental? Where the hell is that version number from?
To find out I checked [packages in Ubuntu](https://packages.ubuntu.com/search?keywords=ansible). No such thing as v4.6 there. So I googled ["download ansible"](https://duckduckgo.com/?t=ffab&q=download+ansible&ia=web). First two hits on DuckDuckGo are Red Hat corporate crap web pages, but the third is the [Installing Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) docu.
So I went to the "Installing Ansible" docu, clicked on "Installing Ansible in Debian" read the text about the PPAs, scrolled back up to "Installing Ansible on Ubuntu" saw the [PPA link](https://launchpad.net/~ansible/+archive/ubuntu/ansible) and went there.
Now the PPA has a whole truckload of versions, among which figures a v5.2.0. Now how would I know which version is a stable release (as opposed to a dev build)? No word on the PPA page about stable/dev. Back to the [Installing Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) docu and scanning for relevant information there.
The chapter [Finding tarballs of tagged releases](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#finding-tarballs-of-tagged-releases) says "You can download the latest stable release from [PyPI’s ansible package page](https://pypi.org/project/ansible/). OK, lets follow that rabbit hole then.
The latter page comes up with "ansible 5.2.0" on top. All right, so is 5.2.0 the current stable release? That page says:
xxxxx
Branch Info
- The devel branch corresponds to the release actively under development.
- The stable-2.X branches correspond to stable releases.
- Create a branch based on devel and set up a dev environment if you want to open a PR.
- See the [Ansible release and maintenance page](https://docs.ansible.com/ansible/latest/reference_appendices/release_and_maintenance.html) for information about active branches.
xxxxx
So in that case 5.2.0 is not a stable release, but only 2.X branches are? Maybe check out the [Ansible release and maintenance page](https://docs.ansible.com/ansible/latest/reference_appendices/release_and_maintenance.html) as that paragraph above suggests?
So let's follow that link, which shows a page saying "See the the [devel release and maintenance page](https://docs.ansible.com/ansible/devel/reference_appendices/release_and_maintenance.html) for up to date information.". So on through yet another link and page to that page now.
Top of that page reads "You are reading the devel version of the Ansible documentation - this version is not guaranteed stable. Use the version selection to the left if you want the latest stable released version.". Oh. OK. However there is no link to the "stable" version there; the drop-down to select the version of the docu allows selecting versions "devel", "2.9" and "latest". What would the latest stable version be now? "2.9"? "latest"? But maybe that dev page will tell me anyway what the latest stable version is? Well there is a **LOT** of text to read there to find out what the secret(?!?)/obfuscated(?why?)/mysterious(?!) latest stable version is. Let's see... parsing that looong and complicated document seems to imply that it is the "Ansible community package" that I want...
And it *seems* that versions v4.x and v5.x are currently maintained. But then I know that the feature I want was tagged with 2.11. Now that can't be the "Ansible community package" version ... or **could it**? Who knows, my head is already full of info I had to read only to find out, and I still don't know what to download. So let's follow one more link, namely to the [5.x Changelog](https://github.com/ansible-community/ansible-build-data/blob/main/5/CHANGELOG-v5.rst), read through that and, lo and behold, there it is: a few pages of scrolling down it says "Ansible 5.2.0 contains Ansible-core version 2.12.1".
Hallelujah!
Maybe you are thinking to yourself right now "boy that sucks reading tickets by noobs", but let me tell you, having to spend one hour of reading to find out what the latest stable ansible version is sucks too. IMHO finding out what the latest stable release is should be as simple as the "simple" in the "simple automation system". Is it possible to clear this jungle up and make it approachable by humans that don't want to spend an hour only to find out what the latest stable release is?
I'd certainly suggest a patch if I could, but I'm lost at this point. There are too many pages, with too much info and it's much, much, much too complicated. Help!?
Thanks a lot for ansible!
PS: Even through all of this I *still* haven't found out where the "Ansible community package" versions are coming from (are they tags? Where?)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/release_and_maintenance.rst
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/USER/.ansible.cfg
configured module search path = [EDITED, '/usr/share/ansible']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Configuration
```console
$ ansible-config dump --only-changed
doesn't apply
```
### OS / Environment
doesn't apply
### Additional Information
doesn't apply
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76799
|
https://github.com/ansible/ansible/pull/77577
|
f7c2b1986c5b6afce1d8fe83ce6bf26b535aa617
|
f633b62cbc995befe47d98c735d57baacb9cda2e
| 2022-01-19T17:56:57Z |
python
| 2022-04-28T13:12:49Z |
docs/docsite/.templates/banner.html
|
{% if is_eol %}
{# Creates a banner at the top of the page for EOL versions. #}
<div id='banner' class='Admonition caution'>
<p>You are reading an unmaintained version of the Ansible documentation. Unmaintained Ansible versions can contain unfixed security vulnerabilities (CVE). Please upgrade to a maintained version. See <a href="/ansible/latest/">the latest Ansible documentation</a>.</p>
</div>
{% else %}
<script>
function startsWith(str, needle) {
return str.slice(0, needle.length) == needle
}
function startsWithOneOf(str, needles) {
return needles.some(function (needle) {
return startsWith(str, needle);
});
}
var banner = '';
var extra_banner = '';
/*use extra_banner for when marketing wants something extra, like a survey or AnsibleFest notice
var extra_banner =
'<div id="latest_extra_banner_id" class="admonition important">' +
'<br>' +
'<p>' +
'You\'re invited to AnsibleFest 2021! ' +
'<p>' +
'Explore ways to automate, innovate, and accelerate with our free virtual event September 29-30. ' +
'<pr>' +
'<a href="https://reg.ansiblefest.redhat.com/flow/redhat/ansible21/regGenAttendee/login?sc_cid=7013a000002pemAAAQ">Register now!</a> ' +
'</p>' +
'<br>' +
'</div>'; */
// Create a banner if we're not on the official docs site
if (location.host == "docs.testing.ansible.com") {
document.write('<div id="testing_banner_id" class="admonition important">' +
'<p>This is the testing site for Ansible Documentation. Unless you are reviewing pre-production changes, please visit the <a href="https://docs.ansible.com/ansible/latest/">official documentation website</a>.</p> <p></p>' +
'</div>');
}
{% if available_versions is defined %}
// Create a banner
current_url_path = window.location.pathname;
var important = false;
var msg = '<p>';
if (startsWith(current_url_path, "/ansible-core/")) {
msg += 'You are reading documentation for Ansible Core, which contains no plugins except for those in ansible.builtin. For documentation of the Ansible package, go to <a href="/ansible/latest">the latest documentation</a>.';
} else if (startsWithOneOf(current_url_path, ["/ansible/latest/", "/ansible/{{ latest_version }}/"])) {
/* temp extra banner to advertise AnsibleFest2021 */
banner += extra_banner;
msg += 'You are reading the latest community version of the Ansible documentation. Red Hat subscribers, select <b>2.9</b> in the version selection to the left for the most recent Red Hat release.';
} else if (startsWith(current_url_path, "/ansible/2.9/")) {
msg += 'You are reading the latest Red Hat released version of the Ansible documentation. Community users can use this version, or select <b>latest</b> from the version selector to the left for the most recent community version.';
} else if (startsWith(current_url_path, "/ansible/devel/")) {
/* temp extra banner to advertise AnsibleFest2021 */
banner += extra_banner;
/* temp banner to advertise survey
important = true;
msg += 'Please take our <a href="https://www.surveymonkey.co.uk/r/B9V3CDY">Docs survey</a> before December 31 to help us improve Ansible documentation.';
*/
msg += 'You are reading the <b>devel</b> version of the Ansible documentation - this version is not guaranteed stable. Use the version selection to the left if you want the latest stable released version.';
} else {
msg += 'You are reading an older version of the Ansible documentation. Use the version selection to the left if you want the latest stable released version.';
}
msg += '</p>';
banner += '<div id="banner_id" class="admonition ';
banner += important ? 'important' : 'caution';
banner += '">';
banner += important ? '<br>' : '';
banner += msg;
banner += important ? '<br>' : '';
banner += '</div>';
document.write(banner);
{% endif %}
</script>
{% endif %}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,799 |
clear documentation of versioning of stable versions
|
### Summary
Needing a new ansible feature I wanted to see if a newer ansible Debian package is available somewhere.
I went and checked [available Debian packages](https://packages.debian.org/search?keywords=ansible). Ouuugh, there is v4.6.0 available in Debian experimental? Where the hell is that version number from?
To find out I checked [packages in Ubuntu](https://packages.ubuntu.com/search?keywords=ansible). No such thing as v4.6 there. So I googled ["download ansible"](https://duckduckgo.com/?t=ffab&q=download+ansible&ia=web). First two hits on DuckDuckGo are Red Hat corporate crap web pages, but the third is the [Installing Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) docu.
So I went to the "Installing Ansible" docu, clicked on "Installing Ansible in Debian" read the text about the PPAs, scrolled back up to "Installing Ansible on Ubuntu" saw the [PPA link](https://launchpad.net/~ansible/+archive/ubuntu/ansible) and went there.
Now the PPA has a whole truckload of versions, among which figures a v5.2.0. Now how would I know which version is a stable release (as opposed to a dev build)? No word on the PPA page about stable/dev. Back to the [Installing Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) docu and scanning for relevant information there.
The chapter [Finding tarballs of tagged releases](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#finding-tarballs-of-tagged-releases) says "You can download the latest stable release from [PyPI’s ansible package page](https://pypi.org/project/ansible/). OK, lets follow that rabbit hole then.
The latter page comes up with "ansible 5.2.0" on top. All right, so is 5.2.0 the current stable release? That page says:
xxxxx
Branch Info
- The devel branch corresponds to the release actively under development.
- The stable-2.X branches correspond to stable releases.
- Create a branch based on devel and set up a dev environment if you want to open a PR.
- See the [Ansible release and maintenance page](https://docs.ansible.com/ansible/latest/reference_appendices/release_and_maintenance.html) for information about active branches.
xxxxx
So in that case 5.2.0 is not a stable release, but only 2.X branches are? Maybe check out the [Ansible release and maintenance page](https://docs.ansible.com/ansible/latest/reference_appendices/release_and_maintenance.html) as that paragraph above suggests?
So let's follow that link, which shows a page saying "See the the [devel release and maintenance page](https://docs.ansible.com/ansible/devel/reference_appendices/release_and_maintenance.html) for up to date information.". So on through yet another link and page to that page now.
Top of that page reads "You are reading the devel version of the Ansible documentation - this version is not guaranteed stable. Use the version selection to the left if you want the latest stable released version.". Oh. OK. However there is no link to the "stable" version there; the drop-down to select the version of the docu allows selecting versions "devel", "2.9" and "latest". What would the latest stable version be now? "2.9"? "latest"? But maybe that dev page will tell me anyway what the latest stable version is? Well there is a **LOT** of text to read there to find out what the secret(?!?)/obfuscated(?why?)/mysterious(?!) latest stable version is. Let's see... parsing that looong and complicated document seems to imply that it is the "Ansible community package" that I want...
And it *seems* that versions v4.x and v5.x are currently maintained. But then I know that the feature I want was tagged with 2.11. Now that can't be the "Ansible community package" version ... or **could it**? Who knows, my head is already full of info I had to read only to find out, and I still don't know what to download. So let's follow one more link, namely to the [5.x Changelog](https://github.com/ansible-community/ansible-build-data/blob/main/5/CHANGELOG-v5.rst), read through that and, lo and behold, there it is: a few pages of scrolling down it says "Ansible 5.2.0 contains Ansible-core version 2.12.1".
Hallelujah!
Maybe you are thinking to yourself right now "boy that sucks reading tickets by noobs", but let me tell you, having to spend one hour of reading to find out what the latest stable ansible version is sucks too. IMHO finding out what the latest stable release is should be as simple as the "simple" in the "simple automation system". Is it possible to clear this jungle up and make it approachable by humans that don't want to spend an hour only to find out what the latest stable release is?
I'd certainly suggest a patch if I could, but I'm lost at this point. There are too many pages, with too much info and it's much, much, much too complicated. Help!?
Thanks a lot for ansible!
PS: Even through all of this I *still* haven't found out where the "Ansible community package" versions are coming from (are they tags? Where?)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/release_and_maintenance.rst
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/USER/.ansible.cfg
configured module search path = [EDITED, '/usr/share/ansible']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Configuration
```console
$ ansible-config dump --only-changed
doesn't apply
```
### OS / Environment
doesn't apply
### Additional Information
doesn't apply
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76799
|
https://github.com/ansible/ansible/pull/77577
|
f7c2b1986c5b6afce1d8fe83ce6bf26b535aa617
|
f633b62cbc995befe47d98c735d57baacb9cda2e
| 2022-01-19T17:56:57Z |
python
| 2022-04-28T13:12:49Z |
docs/docsite/rst/reference_appendices/release_and_maintenance.rst
|
.. _release_and_maintenance:
************************
Releases and maintenance
************************
This section describes release cycles, rules, and maintenance schedules for both Ansible community projects: the Ansible community package and ``ansible-core``. The two projects have different versioning systems, maintenance structures, contents, and workflows.
==================================================== ========================================================
Ansible community package ansible-core
==================================================== ========================================================
Uses new versioning (2.10, then 3.0.0) Continues "classic Ansible" versioning (2.11, then 2.12)
Follows semantic versioning rules Does not use semantic versioning
Maintains only one version at a time Maintains latest version plus two older versions
Includes language, runtime, and selected Collections Includes language, runtime, and builtin plugins
Developed and maintained in Collection repositories Developed and maintained in ansible/ansible repository
==================================================== ========================================================
Many community users install the Ansible community package. The Ansible community package offers the functionality that existed in Ansible 2.9, with more than 85 Collections containing thousands of modules and plugins. The ``ansible-core`` option is primarily for developers and users who want to install only the collections they need.
.. contents::
:local:
.. _release_cycle:
Release cycle overview
======================
The two community releases are related - the release cycle follows this pattern:
#. Release of a new ansible-core major version, for example, ansible-core 2.11
* New release of ansible-core and two prior versions are now maintained (in this case, ansible-base 2.10, Ansible 2.9)
* Work on new features for ansible-core continues in the ``devel`` branch
#. Collection freeze (no new Collections or new versions of existing Collections) on the Ansible community package
#. Release candidate for Ansible community package, testing, additional release candidates as necessary
#. Release of a new Ansible community package major version based on the new ansible-core, for example, Ansible 4.0.0 based on ansible-core 2.11
* Newest release of the Ansible community package is the only version now maintained
* Work on new features continues in Collections
* Individual Collections can make multiple minor and/or major releases
#. Minor releases of three maintained ansible-core versions every three weeks (2.11.1)
#. Minor releases of the single maintained Ansible community package version every three weeks (4.1.0)
#. Feature freeze on ansible-core
#. Release candidate for ansible-core, testing, additional release candidates as necessary
#. Release of the next ansible-core major version, cycle begins again
Ansible community package release cycle
---------------------------------------
The Ansible community team typically releases two major versions of the community package per year, on a flexible release cycle that trails the release of ``ansible-core``. This cycle can be extended to allow for larger changes to be properly implemented and tested before a new release is made available. See :ref:`ansible_roadmaps` for upcoming release details. Between major versions, we release a new minor version of the Ansible community package every three weeks. Minor releases include new backwards-compatible features, modules and plugins, as well as bug fixes.
Starting with version 2.10, the Ansible community team guarantees maintenance for only one major community package release at a time. For example, when Ansible 4.0.0 gets released, the team will stop making new 3.x releases. Community members may maintain older versions if desired.
.. note::
Older, unmaintained versions of the Ansible community package might contain unfixed security vulnerabilities (*CVEs*). If you are using a release of the Ansible community package that is no longer maintained, we strongly encourage you to upgrade as soon as possible in order to benefit from the latest features and security fixes.
Each major release of the Ansible community package accepts the latest released version of each included Collection and the latest released version of ansible-core. For specific schedules and deadlines, see the :ref:`ansible_roadmaps` for each version. Major releases of the Ansible community package can contain breaking changes in the modules and other plugins within the included Collections and/or in core features.
The Ansible community package follows semantic versioning rules. Minor releases of the Ansible community package accept only backwards-compatible changes in included Collections, in other words, not Collections major releases. Collections must also use semantic versioning, so the Collection version numbers reflect this rule. For example, if Ansible 3.0.0 releases with community.general 2.0.0, then all minor releases of Ansible 3.x (such as Ansible 3.1.0 or Ansible 3.5.0) must include a 2.x release of community.general (such as 2.8.0 or 2.9.5) and not 3.x.x or later major releases.
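The constraint can be stated as a tiny check. The sketch below is illustrative only and is not part of any release tooling:
.. code-block:: python

    # Illustrative: within one Ansible major release, an included collection may
    # only move within the collection major version that shipped in the x.0.0 release.
    def collection_version_allowed(pinned_major, candidate_version):
        """candidate_version is a version string such as '2.9.5'."""
        return int(candidate_version.split('.')[0]) == pinned_major

    assert collection_version_allowed(2, '2.9.5')        # fine for Ansible 3.x
    assert not collection_version_allowed(2, '3.0.0')    # must wait for Ansible 4.0.0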
Work in Collections is tracked within the individual Collection repositories.
You can refer to the :ref:`Ansible package porting guides<porting_guides>` for tips on updating your playbooks to run on newer versions of Ansible. For Ansible 2.10 and later releases, you can install the Ansible package with ``pip``. See :ref:`intro_installation_guide` for details. For older releases, you can download the Ansible release from `<https://releases.ansible.com/ansible/>`_.
Ansible community changelogs
----------------------------
This table links to the changelogs for each major Ansible release. These changelogs contain the dates and significant changes in each minor release.
================================== =================================================
Ansible Community Package Release Status
================================== =================================================
6.0.0 In development (unreleased)
`5.x Changelogs`_ Current
`4.x Changelogs`_ End of life after 4.10
`3.x Changelogs`_ Unmaintained (end of life)
`2.10 Changelogs`_ Unmaintained (end of life)
================================== =================================================
.. _5.x Changelogs: https://github.com/ansible-community/ansible-build-data/blob/main/5/CHANGELOG-v5.rst
.. _4.x Changelogs: https://github.com/ansible-community/ansible-build-data/blob/main/4/CHANGELOG-v4.rst
.. _3.x Changelogs: https://github.com/ansible-community/ansible-build-data/blob/main/3/CHANGELOG-v3.rst
.. _2.10 Changelogs: https://github.com/ansible-community/ansible-build-data/blob/main/2.10/CHANGELOG-v2.10.rst
ansible-core release cycle
--------------------------
``ansible-core`` is developed and released on a flexible release cycle. This cycle can be extended in order to allow for larger changes to be properly implemented and tested before a new release is made available. See :ref:`ansible_core_roadmaps` for upcoming release details.
``ansible-core`` has a graduated maintenance structure that extends to three major releases.
For more information, read about the :ref:`development_and_stable_version_maintenance_workflow` or
see the chart in :ref:`release_schedule` for the degrees to which current releases are maintained.
.. note::
Older, unmaintained versions of ``ansible-core`` can contain unfixed security vulnerabilities (*CVEs*). If you are using a release of ``ansible-core`` that is no longer maintained, we strongly encourage you to upgrade as soon as possible to benefit from the latest features and security fixes. ``ansible-core`` maintenance continues for 3 releases. Thus the latest release receives security and general bug fixes when it is first released, security and critical bug fixes when the next ``ansible-core`` version is released, and **only** security fixes once the follow on to that version is released.
You can refer to the :ref:`core_porting_guides` for tips on updating your playbooks to run on newer versions of ``ansible-core``.
You can install ``ansible-core`` with ``pip``. See :ref:`intro_installation_guide` for details.
.. _release_schedule:
``ansible-core`` changelogs
----------------------------
This table links to the changelogs for each major ``ansible-core``
release. These changelogs contain the dates and
significant changes in each minor release.
================================= ==================================================== ======================
``ansible-core``/``ansible-base`` Status Expected end of life
Release
================================= ==================================================== ======================
devel In development (ansible-core 2.13 unreleased, trunk) TBD
`2.12 ansible-core Changelogs`_ Maintained (security **and** general bug fixes) May 2023
`2.11 ansible-core Changelogs`_ Maintained (security **and** critical bug fixes) Nov 2022
`2.10 ansible-base Changelogs`_ Maintained (security fixes only) May 2022
`2.9 Changelogs`_ Maintained (pre-collections) (security fixes only) May 2022
`2.8 Changelogs`_ Unmaintained (end of life) EOL
`2.7 Changelogs`_ Unmaintained (end of life) EOL
`2.6 Changelogs`_ Unmaintained (end of life) EOL
`2.5 Changelogs`_ Unmaintained (end of life) EOL
<2.5 Unmaintained (end of life) EOL
================================= ==================================================== ======================
.. _2.12 ansible-core Changelogs:
.. _2.12: https://github.com/ansible/ansible/blob/stable-2.12/changelogs/CHANGELOG-v2.12.rst
.. _2.11 ansible-core Changelogs:
.. _2.11: https://github.com/ansible/ansible/blob/stable-2.11/changelogs/CHANGELOG-v2.11.rst
.. _2.10 ansible-base Changelogs:
.. _2.10: https://github.com/ansible/ansible/blob/stable-2.10/changelogs/CHANGELOG-v2.10.rst
.. _2.9 Changelogs:
.. _2.9: https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v2.9.rst
.. _2.8 Changelogs:
.. _2.8: https://github.com/ansible/ansible/blob/stable-2.8/changelogs/CHANGELOG-v2.8.rst
.. _2.7 Changelogs: https://github.com/ansible/ansible/blob/stable-2.7/changelogs/CHANGELOG-v2.7.rst
.. _2.6 Changelogs:
.. _2.6: https://github.com/ansible/ansible/blob/stable-2.6/changelogs/CHANGELOG-v2.6.rst
.. _2.5 Changelogs: https://github.com/ansible/ansible/blob/stable-2.5/changelogs/CHANGELOG-v2.5.rst
.. _support_life:
.. _methods:
Preparing for a new release
===========================
.. _release_freezing:
Feature freezes
---------------
During final preparations for a new release, core developers and maintainers focus on improving the release candidate, not on adding or reviewing new features. We may impose a feature freeze.
A feature freeze means that we delay new features and fixes that are not related to the pending release so we can ship the new release as soon as possible.
Release candidates
------------------
We create at least one release candidate before each new major release of Ansible or ``ansible-core``. Release candidates allow the Ansible community to try out new features, test existing playbooks on the release candidate, and report bugs or issues they find.
Ansible and ``ansible-core`` tag the first release candidate (RC1), which is usually scheduled to last five business days. If no major bugs or issues are identified during this period, the release candidate becomes the final release.
If there are major problems with the first candidate, the team and the community fix them and tag a second release candidate (RC2). This second candidate lasts for a shorter duration than the first. If no problems have been reported for an RC2 after two business days, the second release candidate becomes the final release.
If there are major problems in RC2, the cycle begins again with another release candidate and repeats until the maintainers agree that all major problems have been fixed.
.. _development_and_stable_version_maintenance_workflow:
Development and maintenance workflows
=====================================
In between releases, the Ansible community develops new features, maintains existing functionality, and fixes bugs in ``ansible-core`` and in the collections included in the Ansible community package.
Ansible community package workflow
----------------------------------
The Ansible community develops and maintains the features and functionality included in the Ansible community package in Collections repositories, with a workflow that looks like this:
* Developers add new features and bug fixes to the individual Collections, following each Collection's rules on contributing.
* Each new feature and each bug fix includes a changelog fragment describing the work.
* Release engineers create a minor release for the current version every three weeks to ensure that the latest bug fixes are available to users.
* At the end of the development period, the release engineers announce which Collections, and which major version of each included Collection, will be included in the next release of the Ansible community package. New Collections and new major versions may not be added after this, and the work of creating a new release begins.
We generally do not provide fixes for unmaintained releases of the Ansible community package; however, there can sometimes be exceptions for critical issues.
Some Collections are maintained by the Ansible team, some by Partner organizations, and some by community teams. For more information on adding features or fixing bugs in Ansible-maintained Collections, see :ref:`contributing_maintained_collections`.
ansible-core workflow
---------------------
The Ansible community develops and maintains ``ansible-core`` on GitHub_, with a workflow that looks like this:
* Developers add new features and bug fixes to the ``devel`` branch.
* Each new feature and each bug fix includes a changelog fragment describing the work.
* The development team backports bug fixes to one, two, or three stable branches, depending on the severity of the bug. They do not backport new features.
* Release engineers create a minor release for each maintained version every three weeks to ensure that the latest bug fixes are available to users.
* At the end of the development period, the release engineers impose a feature freeze and the work of creating a new release begins.
We generally do not provide fixes for unmaintained releases of ``ansible-core``; however, there can sometimes be exceptions for critical issues.
For more information on adding features or fixing bugs in ``ansible-core`` see :ref:`community_development_process`.
.. _GitHub: https://github.com/ansible/ansible
.. _release_changelogs:
Generating changelogs
----------------------
We generate changelogs based on fragments. When creating new features for existing modules and plugins or fixing bugs, create a changelog fragment describing the change. A changelog entry is not needed for new modules or plugins. Details for those items will be generated from the module documentation.
To add changelog fragments to Collections in the Ansible community package, we recommend the `antsibull-changelog utility <https://github.com/ansible-community/antsibull-changelog/blob/main/docs/changelogs.rst>`_.
To add changelog fragments for new features and bug fixes in ``ansible-core``, see the :ref:`changelog examples and instructions<changelogs_how_to>` in the Community Guide.
Deprecation cycles
==================
Sometimes we remove a feature, normally in favor of a reimplementation that we hope does a better job. To do this we have a deprecation cycle. First we mark a feature as 'deprecated'. This is normally accompanied with warnings to the user as to why we deprecated it, what alternatives they should switch to and when (which version) we are scheduled to remove the feature permanently.
Ansible community package deprecation cycle
--------------------------------------------
Since Ansible is a package of individual collections, the deprecation cycle depends on the collection maintainers. We recommend the collection maintainers deprecate a feature in one Ansible major version and do not remove that feature for one year, or at least until the next major Ansible version. For example, deprecate the feature in 3.1.0, and do not remove the feature until 5.0.0, or 4.0.0 at the earliest. Collections should use semantic versioning, such that the major collection version cannot be changed within an Ansible major version. Thus the removal should not happen before the next major Ansible community package release. This is up to each collection maintainer and cannot be guaranteed.
ansible-core deprecation cycle
-------------------------------
The deprecation cycle in ``ansible-core`` is normally across 4 feature releases (2.x.y, where the x marks a feature release and the y a bugfix release), so the feature is normally removed in the 4th release after we announce the deprecation. For example, something deprecated in 2.9 will be removed in 2.13, assuming we do not jump to 3.x before that point. The tracking is tied to the number of releases, not the release numbering.
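Stated as code, the schedule is simple arithmetic. The sketch below is illustrative only and assumes no major-version jump occurs in between:
.. code-block:: python

    # Illustrative: a feature deprecated in 2.x is normally removed in 2.(x + 4).
    def removal_version(deprecated_in):
        major, minor = (int(part) for part in deprecated_in.split('.'))
        return '%d.%d' % (major, minor + 4)

    assert removal_version('2.9') == '2.13'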
.. seealso::
:ref:`community_committer_guidelines`
Guidelines for Ansible core contributors and maintainers
:ref:`testing_strategies`
Testing strategies
:ref:`ansible_community_guide`
Community information and contributing
`Development Mailing List <https://groups.google.com/group/ansible-devel>`_
Mailing list for development topics
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,669 |
Using core connection plugin's FQCN will crash `ansible-playbook`
|
### Summary
When running `ansible-playbook` with a connection plugin from the `ansible.builtin` collection referenced by its FQCN, the whole process crashes.
The crash happens during any configuration lookup using `AnsiblePlugin.get_option()` from said plugin, with the following exception:
> Requested entry (plugin_type: connection plugin: ansible_collections.ansible.builtin.plugins.connection.ssh setting: host ) was not defined in configuration
It doesn't matter if the connection plugin is specified via `--connection` argument, `connection:` keyword or `ansible_connection` variable.
However, ad-hoc commands and ansible-console do work using the FQCN (which is recommended in the docs).
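To make the mismatch concrete: judging from the traceback, the config definitions appear to be keyed by the short plugin name, while the loaded plugin reports its long internal module path as `_load_name`. A minimal sketch of that mismatch (the dictionary below is hypothetical, not the actual ansible data structure):
```python
# Hypothetical illustration of the lookup mismatch behind the crash.
definitions = {('connection', 'ssh'): {'host': {}}}  # assumed: keyed by short name

load_name = 'ansible_collections.ansible.builtin.plugins.connection.ssh'
print(('connection', load_name) in definitions)  # False -> "not defined in configuration."
print(('connection', 'ssh') in definitions)      # True  -> what a short-name run resolves
```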
---
**PS.** I'm willing to investigate this further and possibly even make a PR if someone with internal knowledge would point me in the right direction.
### Issue Type
Bug Report
### Component Name
ansible-playbook
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/bin/ansible
python version = 3.10.4 (main, Mar 23 2022, 23:05:40) [GCC 11.2.0]
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No custom config, all defaults
```
### OS / Environment
- Arch Linux (kernel version `5.17.4-arch1-1`)
- Python 3.10.4
- `ansible-core` installed via pip (inside fresh venv)
### Steps to Reproduce
Example playbook using `connection` keyword:
```yaml
- hosts: all # Do not use the implicit `localhost`, as the `local` plugin won't perform a configuration lookup in the default configuration
connection: ansible.builtin.ssh
tasks:
- ping:
```
Quick and dirty one-liner using `--connection` CLI argument instead:
**PS!** Shell must support process substitution for the inline playbook (ie. bash or zsh)
```bash
ansible-playbook -i localhost, --connection ansible.builtin.ssh <(echo "[{'hosts':'all','tasks':[{'action':'ping'}]}]")
```
> Note: Although it fails at fact gathering, it is no factor - it will fail without it as well
### Expected Results
Expected the command not to crash during the connection plugin config lookup
### Actual Results
```console
$ ansible-playbook -i localhost, --connection ansible.builtin.ssh -vvvv <(echo "[{'hosts':'all','tasks':[{'action':'ping'}]}]")
/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/paramiko/transport.py:236: CryptographyDeprecationWarning: Blowfish has been deprecated
"class": algorithms.Blowfish,
ansible-playbook [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/bin/ansible-playbook
python version = 3.10.4 (main, Mar 23 2022, 23:05:40) [GCC 11.2.0]
jinja version = 3.1.1
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
Loading callback plugin default of type stdout, v2.0 from /home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: 13 *********************************************************************************************************************************************************************************************************************************
Positional arguments: /proc/self/fd/13
verbosity: 4
connection: ansible.builtin.ssh
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('localhost,',)
forks: 5
1 plays in /proc/self/fd/13
PLAY [all] ***********************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************
task path: /proc/self/fd/13:1
The full traceback is:
Traceback (most recent call last):
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/config/manager.py", line 425, in get_config_value
value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name,
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/config/manager.py", line 558, in get_config_value_and_origin
raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: Requested entry (plugin_type: connection plugin: ansible_collections.ansible.builtin.plugins.connection.ssh setting: host ) was not defined in configuration.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 600, in _execute
result = self._handler.run(task_vars=variables)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/gather_facts.py", line 100, in run
res = self._execute_module(module_name=fact_module, module_args=mod_args, task_vars=task_vars, wrap_async=False)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 973, in _execute_module
self._make_tmp_path()
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 390, in _make_tmp_path
tmpdir = self._remote_expand_user(self.get_shell_option('remote_tmp', default='~/.ansible/tmp'), sudoable=False)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 856, in _remote_expand_user
data = self._low_level_execute_command(cmd, sudoable=False)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 1253, in _low_level_execute_command
rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/connection/ssh.py", line 1276, in exec_command
self.host = self.get_option('host') or self._play_context.remote_addr
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'Requested entry (plugin_type: connection plugin: ansible_collections.ansible.builtin.plugins.connection.ssh setting: host ) was not defined in configuration.'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77669
|
https://github.com/ansible/ansible/pull/77659
|
fb2f080b42c16e28262ba6b9c295332e09e2bfad
|
a3cc6a581ef191faf1c9ec102d1c116c4c2a8778
| 2022-04-27T22:32:29Z |
python
| 2022-04-28T19:14:33Z |
changelogs/fragments/config_load_by_name.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,669 |
Using core connection plugin's FQCN will crash `ansible-playbook`
|
### Summary
When running `ansible-playbook` with a connection plugin from the `ansible.builtin` collection referenced by its FQCN, the whole process crashes.
The crash happens during any configuration lookup using `AnsiblePlugin.get_option()` from said plugin, with the following exception:
> Requested entry (plugin_type: connection plugin: ansible_collections.ansible.builtin.plugins.connection.ssh setting: host ) was not defined in configuration
It doesn't matter if the connection plugin is specified via `--connection` argument, `connection:` keyword or `ansible_connection` variable.
However, ad-hoc commands and ansible-console do work using the FQCN (which is recommended in the docs).
---
**PS.** I'm willing to investigate this further and possibly even make a PR if someone with internal knowledge would point me in the right direction.
### Issue Type
Bug Report
### Component Name
ansible-playbook
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/bin/ansible
python version = 3.10.4 (main, Mar 23 2022, 23:05:40) [GCC 11.2.0]
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No custom config, all defaults
```
### OS / Environment
- Arch Linux (kernel version `5.17.4-arch1-1`)
- Python 3.10.4
- `ansible-core` installed via pip (inside fresh venv)
### Steps to Reproduce
Example playbook using `connection` keyword:
```yaml
- hosts: all # Do not use the implicit `localhost`, as the `local` plugin won't perform a configuration lookup in the default configuration
connection: ansible.builtin.ssh
tasks:
- ping:
```
Quick and dirty one-liner using `--connection` CLI argument instead:
**PS!** Shell must support process substitution for the inline playbook (ie. bash or zsh)
```bash
ansible-playbook -i localhost, --connection ansible.builtin.ssh <(echo "[{'hosts':'all','tasks':[{'action':'ping'}]}]")
```
> Note: Although it fails at fact gathering, it is no factor - it will fail without it as well
### Expected Results
Expected the command not to crash during the connection plugin config lookup
### Actual Results
```console
$ ansible-playbook -i localhost, --connection ansible.builtin.ssh -vvvv <(echo "[{'hosts':'all','tasks':[{'action':'ping'}]}]")
/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/paramiko/transport.py:236: CryptographyDeprecationWarning: Blowfish has been deprecated
"class": algorithms.Blowfish,
ansible-playbook [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/bin/ansible-playbook
python version = 3.10.4 (main, Mar 23 2022, 23:05:40) [GCC 11.2.0]
jinja version = 3.1.1
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
Loading callback plugin default of type stdout, v2.0 from /home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: 13 *********************************************************************************************************************************************************************************************************************************
Positional arguments: /proc/self/fd/13
verbosity: 4
connection: ansible.builtin.ssh
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('localhost,',)
forks: 5
1 plays in /proc/self/fd/13
PLAY [all] ***********************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************
task path: /proc/self/fd/13:1
The full traceback is:
Traceback (most recent call last):
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/config/manager.py", line 425, in get_config_value
value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name,
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/config/manager.py", line 558, in get_config_value_and_origin
raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: Requested entry (plugin_type: connection plugin: ansible_collections.ansible.builtin.plugins.connection.ssh setting: host ) was not defined in configuration.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 600, in _execute
result = self._handler.run(task_vars=variables)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/gather_facts.py", line 100, in run
res = self._execute_module(module_name=fact_module, module_args=mod_args, task_vars=task_vars, wrap_async=False)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 973, in _execute_module
self._make_tmp_path()
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 390, in _make_tmp_path
tmpdir = self._remote_expand_user(self.get_shell_option('remote_tmp', default='~/.ansible/tmp'), sudoable=False)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 856, in _remote_expand_user
data = self._low_level_execute_command(cmd, sudoable=False)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 1253, in _low_level_execute_command
rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/connection/ssh.py", line 1276, in exec_command
self.host = self.get_option('host') or self._play_context.remote_addr
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'Requested entry (plugin_type: connection plugin: ansible_collections.ansible.builtin.plugins.connection.ssh setting: host ) was not defined in configuration.'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77669
|
https://github.com/ansible/ansible/pull/77659
|
fb2f080b42c16e28262ba6b9c295332e09e2bfad
|
a3cc6a581ef191faf1c9ec102d1c116c4c2a8778
| 2022-04-27T22:32:29Z |
python
| 2022-04-28T19:14:33Z |
lib/ansible/plugins/loader.py
|
# (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# (c) 2012-2014, Michael DeHaan <[email protected]> and others
# (c) 2017, Toshio Kuratomi <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import os
import os.path
import sys
import warnings
from collections import defaultdict, namedtuple
from ansible import constants as C
from ansible.errors import AnsibleError, AnsiblePluginCircularRedirect, AnsiblePluginRemovedError, AnsibleCollectionUnsupportedVersionError
from ansible.module_utils._text import to_bytes, to_text, to_native
from ansible.module_utils.compat.importlib import import_module
from ansible.module_utils.six import string_types
from ansible.parsing.utils.yaml import from_yaml
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.plugins import get_plugin_class, MODULE_CACHE, PATH_CACHE, PLUGIN_PATH_CACHE
from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder, _get_collection_metadata
from ansible.utils.display import Display
from ansible.utils.plugin_docs import add_fragments
from ansible import __version__ as ansible_version
# TODO: take the packaging dep, or vendor SpecifierSet?
try:
from packaging.specifiers import SpecifierSet
from packaging.version import Version
except ImportError:
SpecifierSet = None # type: ignore[misc]
Version = None # type: ignore[misc]
import importlib.util
display = Display()
get_with_context_result = namedtuple('get_with_context_result', ['object', 'plugin_load_context'])
def get_all_plugin_loaders():
return [(name, obj) for (name, obj) in globals().items() if isinstance(obj, PluginLoader)]
def add_all_plugin_dirs(path):
''' add any existing plugin dirs in the path provided '''
b_path = os.path.expanduser(to_bytes(path, errors='surrogate_or_strict'))
if os.path.isdir(b_path):
for name, obj in get_all_plugin_loaders():
if obj.subdir:
plugin_path = os.path.join(b_path, to_bytes(obj.subdir))
if os.path.isdir(plugin_path):
obj.add_directory(to_text(plugin_path))
else:
display.warning("Ignoring invalid path provided to plugin path: '%s' is not a directory" % to_text(path))
def get_shell_plugin(shell_type=None, executable=None):
if not shell_type:
# default to sh
shell_type = 'sh'
# mostly for backwards compat
if executable:
if isinstance(executable, string_types):
shell_filename = os.path.basename(executable)
try:
shell = shell_loader.get(shell_filename)
except Exception:
shell = None
if shell is None:
for shell in shell_loader.all():
if shell_filename in shell.COMPATIBLE_SHELLS:
shell_type = shell.SHELL_FAMILY
break
else:
raise AnsibleError("Either a shell type or a shell executable must be provided ")
shell = shell_loader.get(shell_type)
if not shell:
raise AnsibleError("Could not find the shell plugin required (%s)." % shell_type)
if executable:
setattr(shell, 'executable', executable)
return shell
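# e.g. get_shell_plugin(executable='/bin/bash') falls back to scanning each
# shell plugin's COMPATIBLE_SHELLS, resolves the 'sh' family, and pins that
# plugin's executable to '/bin/bash' (illustrative of the fallback logic above).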
def add_dirs_to_loader(which_loader, paths):
loader = getattr(sys.modules[__name__], '%s_loader' % which_loader)
for path in paths:
loader.add_directory(path, with_subdir=True)
class PluginPathContext(object):
def __init__(self, path, internal):
self.path = path
self.internal = internal
class PluginLoadContext(object):
def __init__(self):
self.original_name = None
self.redirect_list = []
self.error_list = []
self.import_error_list = []
self.load_attempts = []
self.pending_redirect = None
self.exit_reason = None
self.plugin_resolved_path = None
self.plugin_resolved_name = None
self.plugin_resolved_collection = None # empty string for resolved plugins from user-supplied paths
self.deprecated = False
self.removal_date = None
self.removal_version = None
self.deprecation_warnings = []
self.resolved = False
self._resolved_fqcn = None
@property
def resolved_fqcn(self):
if not self.resolved:
return
if not self._resolved_fqcn:
final_plugin = self.redirect_list[-1]
if AnsibleCollectionRef.is_valid_fqcr(final_plugin) and final_plugin.startswith('ansible.legacy.'):
final_plugin = final_plugin.split('ansible.legacy.')[-1]
if self.plugin_resolved_collection and not AnsibleCollectionRef.is_valid_fqcr(final_plugin):
final_plugin = self.plugin_resolved_collection + '.' + final_plugin
self._resolved_fqcn = final_plugin
return self._resolved_fqcn
def record_deprecation(self, name, deprecation, collection_name):
if not deprecation:
return self
# The `or ''` instead of using `.get(..., '')` makes sure that even if the user explicitly
# sets `warning_text` to `~` (None) or `false`, we still get an empty string.
warning_text = deprecation.get('warning_text', None) or ''
removal_date = deprecation.get('removal_date', None)
removal_version = deprecation.get('removal_version', None)
# If both removal_date and removal_version are specified, use removal_date
if removal_date is not None:
removal_version = None
warning_text = '{0} has been deprecated.{1}{2}'.format(name, ' ' if warning_text else '', warning_text)
display.deprecated(warning_text, date=removal_date, version=removal_version, collection_name=collection_name)
self.deprecated = True
if removal_date:
self.removal_date = removal_date
if removal_version:
self.removal_version = removal_version
self.deprecation_warnings.append(warning_text)
return self
def resolve(self, resolved_name, resolved_path, resolved_collection, exit_reason):
self.pending_redirect = None
self.plugin_resolved_name = resolved_name
self.plugin_resolved_path = resolved_path
self.plugin_resolved_collection = resolved_collection
self.exit_reason = exit_reason
self.resolved = True
return self
def redirect(self, redirect_name):
self.pending_redirect = redirect_name
self.exit_reason = 'pending redirect resolution from {0} to {1}'.format(self.original_name, redirect_name)
self.resolved = False
return self
def nope(self, exit_reason):
self.pending_redirect = None
self.exit_reason = exit_reason
self.resolved = False
return self
class PluginLoader:
'''
PluginLoader loads plugins from the configured plugin directories.
It searches for plugins by iterating through the combined list of play basedirs, configured
paths, and the python path. The first match is used.
'''
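    # Illustrative use (a sketch mirroring the loader instances defined at the
    # bottom of this module, not additional API):
    #
    #   lookup_loader = PluginLoader('LookupModule', 'ansible.plugins.lookup',
    #                                C.DEFAULT_LOOKUP_PLUGIN_PATH, 'lookup_plugins',
    #                                required_base_class='LookupBase')
    #   plugin = lookup_loader.get('items')  # instance of the 'items' lookup plugin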
def __init__(self, class_name, package, config, subdir, aliases=None, required_base_class=None):
aliases = {} if aliases is None else aliases
self.class_name = class_name
self.base_class = required_base_class
self.package = package
self.subdir = subdir
# FIXME: remove alias dict in favor of alias by symlink?
self.aliases = aliases
if config and not isinstance(config, list):
config = [config]
elif not config:
config = []
self.config = config
if class_name not in MODULE_CACHE:
MODULE_CACHE[class_name] = {}
if class_name not in PATH_CACHE:
PATH_CACHE[class_name] = None
if class_name not in PLUGIN_PATH_CACHE:
PLUGIN_PATH_CACHE[class_name] = defaultdict(dict)
# hold dirs added at runtime outside of config
self._extra_dirs = []
# caches
self._module_cache = MODULE_CACHE[class_name]
self._paths = PATH_CACHE[class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[class_name]
self._searched_paths = set()
def __repr__(self):
return 'PluginLoader(type={0})'.format(AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir))
def _clear_caches(self):
if C.OLD_PLUGIN_CACHE_CLEARING:
self._paths = None
else:
# reset global caches
MODULE_CACHE[self.class_name] = {}
PATH_CACHE[self.class_name] = None
PLUGIN_PATH_CACHE[self.class_name] = defaultdict(dict)
# reset internal caches
self._module_cache = MODULE_CACHE[self.class_name]
self._paths = PATH_CACHE[self.class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[self.class_name]
self._searched_paths = set()
def __setstate__(self, data):
'''
Deserializer.
'''
class_name = data.get('class_name')
package = data.get('package')
config = data.get('config')
subdir = data.get('subdir')
aliases = data.get('aliases')
base_class = data.get('base_class')
PATH_CACHE[class_name] = data.get('PATH_CACHE')
PLUGIN_PATH_CACHE[class_name] = data.get('PLUGIN_PATH_CACHE')
self.__init__(class_name, package, config, subdir, aliases, base_class)
self._extra_dirs = data.get('_extra_dirs', [])
self._searched_paths = data.get('_searched_paths', set())
def __getstate__(self):
'''
Serializer.
'''
return dict(
class_name=self.class_name,
base_class=self.base_class,
package=self.package,
config=self.config,
subdir=self.subdir,
aliases=self.aliases,
_extra_dirs=self._extra_dirs,
_searched_paths=self._searched_paths,
PATH_CACHE=PATH_CACHE[self.class_name],
PLUGIN_PATH_CACHE=PLUGIN_PATH_CACHE[self.class_name],
)
def format_paths(self, paths):
''' Returns a string suitable for printing of the search path '''
# Uses a list to get the order right
ret = []
for i in paths:
if i not in ret:
ret.append(i)
return os.pathsep.join(ret)
def print_paths(self):
return self.format_paths(self._get_paths(subdirs=False))
def _all_directories(self, dir):
results = []
results.append(dir)
for root, subdirs, files in os.walk(dir, followlinks=True):
if '__init__.py' in files:
for x in subdirs:
results.append(os.path.join(root, x))
return results
def _get_package_paths(self, subdirs=True):
''' Gets the path of a Python package '''
if not self.package:
return []
if not hasattr(self, 'package_path'):
m = __import__(self.package)
parts = self.package.split('.')[1:]
for parent_mod in parts:
m = getattr(m, parent_mod)
self.package_path = to_text(os.path.dirname(m.__file__), errors='surrogate_or_strict')
if subdirs:
return self._all_directories(self.package_path)
return [self.package_path]
def _get_paths_with_context(self, subdirs=True):
''' Return a list of PluginPathContext objects to search for plugins in '''
# FIXME: This is potentially buggy if subdirs is sometimes True and sometimes False.
# In current usage, everything calls this with subdirs=True except for module_utils_loader and ansible-doc
# which always calls it with subdirs=False. So there currently isn't a problem with this caching.
if self._paths is not None:
return self._paths
ret = [PluginPathContext(p, False) for p in self._extra_dirs]
# look in any configured plugin paths, allow one level deep for subcategories
if self.config is not None:
for path in self.config:
path = os.path.abspath(os.path.expanduser(path))
if subdirs:
contents = glob.glob("%s/*" % path) + glob.glob("%s/*/*" % path)
for c in contents:
c = to_text(c, errors='surrogate_or_strict')
if os.path.isdir(c) and c not in ret:
ret.append(PluginPathContext(c, False))
path = to_text(path, errors='surrogate_or_strict')
if path not in ret:
ret.append(PluginPathContext(path, False))
# look for any plugins installed in the package subtree
# Note package path always gets added last so that every other type of
# path is searched before it.
ret.extend([PluginPathContext(p, True) for p in self._get_package_paths(subdirs=subdirs)])
# HACK: because powershell modules are in the same directory
# hierarchy as other modules we have to process them last. This is
# because powershell only works on windows but the other modules work
# anywhere (possibly including windows if the correct language
        # interpreter is installed). The non-powershell modules can have any
        # file extension and thus powershell modules would otherwise be picked up among them.
# The non-hack way to fix this is to have powershell modules be
# a different PluginLoader/ModuleLoader. But that requires changing
# other things too (known thing to change would be PATHS_CACHE,
        # PLUGIN_PATHS_CACHE, and MODULE_CACHE). Since those three dicts key
# on the class_name and neither regular modules nor powershell modules
# would have class_names, they would not work as written.
#
# The expected sort order is paths in the order in 'ret' with paths ending in '/windows' at the end,
# also in the original order they were found in 'ret'.
# The .sort() method is guaranteed to be stable, so original order is preserved.
ret.sort(key=lambda p: p.path.endswith('/windows'))
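        # e.g. ['/a', '/b/windows', '/c'] sorts to ['/a', '/c', '/b/windows']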
# cache and return the result
self._paths = ret
return ret
def _get_paths(self, subdirs=True):
''' Return a list of paths to search for plugins in '''
paths_with_context = self._get_paths_with_context(subdirs=subdirs)
return [path_with_context.path for path_with_context in paths_with_context]
def _load_config_defs(self, name, module, path):
''' Reads plugin docs to find configuration setting definitions, to push to config manager for later use '''
# plugins w/o class name don't support config
if self.class_name:
type_name = get_plugin_class(self.class_name)
# if type name != 'module_doc_fragment':
if type_name in C.CONFIGURABLE_PLUGINS:
dstring = AnsibleLoader(getattr(module, 'DOCUMENTATION', ''), file_name=path).get_single_data()
if dstring:
add_fragments(dstring, path, fragment_loader=fragment_loader, is_module=(type_name == 'module'))
if dstring and 'options' in dstring and isinstance(dstring['options'], dict):
C.config.initialize_plugin_configuration_definitions(type_name, name, dstring['options'])
display.debug('Loaded config def from plugin (%s/%s)' % (type_name, name))
def add_directory(self, directory, with_subdir=False):
''' Adds an additional directory to the search path '''
directory = os.path.realpath(directory)
if directory is not None:
if with_subdir:
directory = os.path.join(directory, self.subdir)
if directory not in self._extra_dirs:
# append the directory and invalidate the path cache
self._extra_dirs.append(directory)
self._clear_caches()
display.debug('Added %s to loader search path' % (directory))
def _query_collection_routing_meta(self, acr, plugin_type, extension=None):
collection_pkg = import_module(acr.n_python_collection_package_name)
if not collection_pkg:
return None
# FIXME: shouldn't need this...
try:
# force any type-specific metadata postprocessing to occur
import_module(acr.n_python_collection_package_name + '.plugins.{0}'.format(plugin_type))
except ImportError:
pass
# this will be created by the collection PEP302 loader
collection_meta = getattr(collection_pkg, '_collection_meta', None)
if not collection_meta:
return None
# TODO: add subdirs support
# check for extension-specific entry first (eg 'setup.ps1')
# TODO: str/bytes on extension/name munging
if acr.subdirs:
subdir_qualified_resource = '.'.join([acr.subdirs, acr.resource])
else:
subdir_qualified_resource = acr.resource
entry = collection_meta.get('plugin_routing', {}).get(plugin_type, {}).get(subdir_qualified_resource + extension, None)
if not entry:
# try for extension-agnostic entry
entry = collection_meta.get('plugin_routing', {}).get(plugin_type, {}).get(subdir_qualified_resource, None)
return entry
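    # The metadata queried above lives in the collection's meta/runtime.yml;
    # an illustrative shape (matching the keys consumed in _find_fq_plugin below):
    #
    #   plugin_routing:
    #     modules:
    #       old_module:
    #         redirect: ns.coll.new_module
    #         deprecation:
    #           removal_version: 3.0.0
    #           warning_text: use ns.coll.new_module instead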
def _find_fq_plugin(self, fq_name, extension, plugin_load_context, ignore_deprecated=False):
"""Search builtin paths to find a plugin. No external paths are searched,
meaning plugins inside roles inside collections will be ignored.
"""
plugin_load_context.resolved = False
plugin_type = AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir)
acr = AnsibleCollectionRef.from_fqcr(fq_name, plugin_type)
# check collection metadata to see if any special handling is required for this plugin
routing_metadata = self._query_collection_routing_meta(acr, plugin_type, extension=extension)
# TODO: factor this into a wrapper method
if routing_metadata:
deprecation = routing_metadata.get('deprecation', None)
# this will no-op if there's no deprecation metadata for this plugin
if not ignore_deprecated:
plugin_load_context.record_deprecation(fq_name, deprecation, acr.collection)
tombstone = routing_metadata.get('tombstone', None)
# FIXME: clean up text gen
if tombstone:
removal_date = tombstone.get('removal_date')
removal_version = tombstone.get('removal_version')
warning_text = tombstone.get('warning_text') or ''
warning_text = '{0} has been removed.{1}{2}'.format(fq_name, ' ' if warning_text else '', warning_text)
removed_msg = display.get_deprecation_message(msg=warning_text, version=removal_version,
date=removal_date, removed=True,
collection_name=acr.collection)
plugin_load_context.removal_date = removal_date
plugin_load_context.removal_version = removal_version
plugin_load_context.resolved = True
plugin_load_context.exit_reason = removed_msg
raise AnsiblePluginRemovedError(removed_msg, plugin_load_context=plugin_load_context)
redirect = routing_metadata.get('redirect', None)
if redirect:
# FIXME: remove once this is covered in debug or whatever
display.vv("redirecting (type: {0}) {1} to {2}".format(plugin_type, fq_name, redirect))
# The name doing the redirection is added at the beginning of _resolve_plugin_step,
# but if the unqualified name is used in conjunction with the collections keyword, only
# the unqualified name is in the redirect list.
if fq_name not in plugin_load_context.redirect_list:
plugin_load_context.redirect_list.append(fq_name)
return plugin_load_context.redirect(redirect)
# TODO: non-FQCN case, do we support `.` prefix for current collection, assume it with no dots, require it for subdirs in current, or ?
n_resource = to_native(acr.resource, errors='strict')
# we want this before the extension is added
full_name = '{0}.{1}'.format(acr.n_python_package_name, n_resource)
if extension:
n_resource += extension
pkg = sys.modules.get(acr.n_python_package_name)
if not pkg:
# FIXME: there must be cheaper/safer way to do this
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
return plugin_load_context.nope('Python package {0} not found'.format(acr.n_python_package_name))
pkg_path = os.path.dirname(pkg.__file__)
n_resource_path = os.path.join(pkg_path, n_resource)
# FIXME: and is file or file link or ...
if os.path.exists(n_resource_path):
return plugin_load_context.resolve(
full_name, to_text(n_resource_path), acr.collection, 'found exact match for {0} in {1}'.format(full_name, acr.collection))
if extension:
# the request was extension-specific, don't try for an extensionless match
return plugin_load_context.nope('no match for {0} in {1}'.format(to_text(n_resource), acr.collection))
# look for any matching extension in the package location (sans filter)
found_files = [f
for f in glob.iglob(os.path.join(pkg_path, n_resource) + '.*')
if os.path.isfile(f) and not f.endswith(C.MODULE_IGNORE_EXTS)]
if not found_files:
return plugin_load_context.nope('failed fuzzy extension match for {0} in {1}'.format(full_name, acr.collection))
if len(found_files) > 1:
# TODO: warn?
pass
return plugin_load_context.resolve(
full_name, to_text(found_files[0]), acr.collection, 'found fuzzy extension match for {0} in {1}'.format(full_name, acr.collection))
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name '''
result = self.find_plugin_with_context(name, mod_type, ignore_deprecated, check_aliases, collection_list)
if result.resolved and result.plugin_resolved_path:
return result.plugin_resolved_path
return None
def find_plugin_with_context(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name, returning contextual info about the load, recursively resolving redirection '''
plugin_load_context = PluginLoadContext()
plugin_load_context.original_name = name
while True:
result = self._resolve_plugin_step(name, mod_type, ignore_deprecated, check_aliases, collection_list, plugin_load_context=plugin_load_context)
if result.pending_redirect:
if result.pending_redirect in result.redirect_list:
raise AnsiblePluginCircularRedirect('plugin redirect loop resolving {0} (path: {1})'.format(result.original_name, result.redirect_list))
name = result.pending_redirect
result.pending_redirect = None
plugin_load_context = result
else:
break
# TODO: smuggle these to the controller when we're in a worker, reduce noise from normal things like missing plugin packages during collection search
if plugin_load_context.error_list:
display.warning("errors were encountered during the plugin load for {0}:\n{1}".format(name, plugin_load_context.error_list))
# TODO: display/return import_error_list? Only useful for forensics...
# FIXME: store structured deprecation data in PluginLoadContext and use display.deprecate
# if plugin_load_context.deprecated and C.config.get_config_value('DEPRECATION_WARNINGS'):
# for dw in plugin_load_context.deprecation_warnings:
# # TODO: need to smuggle these to the controller if we're in a worker context
# display.warning('[DEPRECATION WARNING] ' + dw)
return plugin_load_context
# FIXME: name bikeshed
def _resolve_plugin_step(self, name, mod_type='', ignore_deprecated=False,
check_aliases=False, collection_list=None, plugin_load_context=PluginLoadContext()):
if not plugin_load_context:
raise ValueError('A PluginLoadContext is required')
plugin_load_context.redirect_list.append(name)
plugin_load_context.resolved = False
global _PLUGIN_FILTERS
if name in _PLUGIN_FILTERS[self.package]:
plugin_load_context.exit_reason = '{0} matched a defined plugin filter'.format(name)
return plugin_load_context
if mod_type:
suffix = mod_type
elif self.class_name:
# Ansible plugins that run in the controller process (most plugins)
suffix = '.py'
else:
# Only Ansible Modules. Ansible modules can be any executable so
# they can have any suffix
suffix = ''
# FIXME: need this right now so we can still load shipped PS module_utils- come up with a more robust solution
if (AnsibleCollectionRef.is_valid_fqcr(name) or collection_list) and not name.startswith('Ansible'):
if '.' in name or not collection_list:
candidates = [name]
else:
candidates = ['{0}.{1}'.format(c, name) for c in collection_list]
for candidate_name in candidates:
try:
plugin_load_context.load_attempts.append(candidate_name)
# HACK: refactor this properly
if candidate_name.startswith('ansible.legacy'):
# 'ansible.legacy' refers to the plugin finding behavior used before collections existed.
# They need to search 'library' and the various '*_plugins' directories in order to find the file.
plugin_load_context = self._find_plugin_legacy(name.replace('ansible.legacy.', '', 1),
plugin_load_context, ignore_deprecated, check_aliases, suffix)
else:
# 'ansible.builtin' should be handled here. This means only internal, or builtin, paths are searched.
plugin_load_context = self._find_fq_plugin(candidate_name, suffix, plugin_load_context=plugin_load_context,
ignore_deprecated=ignore_deprecated)
# Pending redirects are added to the redirect_list at the beginning of _resolve_plugin_step.
# Once redirects are resolved, ensure the final FQCN is added here.
# e.g. 'ns.coll.module' is included rather than only 'module' if a collections list is provided:
# - module:
# collections: ['ns.coll']
if plugin_load_context.resolved and candidate_name not in plugin_load_context.redirect_list:
plugin_load_context.redirect_list.append(candidate_name)
if plugin_load_context.resolved or plugin_load_context.pending_redirect: # if we got an answer or need to chase down a redirect, return
return plugin_load_context
except (AnsiblePluginRemovedError, AnsiblePluginCircularRedirect, AnsibleCollectionUnsupportedVersionError):
# these are generally fatal, let them fly
raise
except ImportError as ie:
plugin_load_context.import_error_list.append(ie)
except Exception as ex:
# FIXME: keep actual errors, not just assembled messages
plugin_load_context.error_list.append(to_native(ex))
if plugin_load_context.error_list:
display.debug(msg='plugin lookup for {0} failed; errors: {1}'.format(name, '; '.join(plugin_load_context.error_list)))
plugin_load_context.exit_reason = 'no matches found for {0}'.format(name)
return plugin_load_context
# if we got here, there's no collection list and it's not an FQ name, so do legacy lookup
return self._find_plugin_legacy(name, plugin_load_context, ignore_deprecated, check_aliases, suffix)
def _find_plugin_legacy(self, name, plugin_load_context, ignore_deprecated=False, check_aliases=False, suffix=None):
"""Search library and various *_plugins paths in order to find the file.
This was behavior prior to the existence of collections.
"""
plugin_load_context.resolved = False
if check_aliases:
name = self.aliases.get(name, name)
# The particular cache to look for modules within. This matches the
# requested mod_type
pull_cache = self._plugin_path_cache[suffix]
try:
path_with_context = pull_cache[name]
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context.resolved = True
return plugin_load_context
except KeyError:
# Cache miss. Now let's find the plugin
pass
# TODO: Instead of using the self._paths cache (PATH_CACHE) and
# self._searched_paths we could use an iterator. Before enabling that
# we need to make sure we don't want to add additional directories
# (add_directory()) once we start using the iterator.
# We can use _get_paths_with_context() since add_directory() forces a cache refresh.
for path_with_context in (p for p in self._get_paths_with_context() if p.path not in self._searched_paths and os.path.isdir(to_bytes(p.path))):
path = path_with_context.path
b_path = to_bytes(path)
display.debug('trying %s' % path)
plugin_load_context.load_attempts.append(path)
internal = path_with_context.internal
try:
full_paths = (os.path.join(b_path, f) for f in os.listdir(b_path))
except OSError as e:
display.warning("Error accessing plugin paths: %s" % to_text(e))
for full_path in (to_native(f) for f in full_paths if os.path.isfile(f) and not f.endswith(b'__init__.py')):
full_name = os.path.basename(full_path)
# HACK: We have no way of executing python byte compiled files as ansible modules so specifically exclude them
# FIXME: I believe this is only correct for modules and module_utils.
# For all other plugins we want .pyc and .pyo should be valid
if any(full_path.endswith(x) for x in C.MODULE_IGNORE_EXTS):
continue
splitname = os.path.splitext(full_name)
base_name = splitname[0]
try:
extension = splitname[1]
except IndexError:
extension = ''
# everything downstream expects unicode
full_path = to_text(full_path, errors='surrogate_or_strict')
# Module found, now enter it into the caches that match this file
if base_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][base_name] = PluginPathContext(full_path, internal)
if full_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][full_name] = PluginPathContext(full_path, internal)
if base_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][base_name] = PluginPathContext(full_path, internal)
if full_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][full_name] = PluginPathContext(full_path, internal)
self._searched_paths.add(path)
try:
path_with_context = pull_cache[name]
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context.resolved = True
return plugin_load_context
except KeyError:
# Didn't find the plugin in this directory. Load modules from the next one
pass
# if nothing is found, try finding alias/deprecated
if not name.startswith('_'):
alias_name = '_' + name
# We've already cached all the paths at this point
if alias_name in pull_cache:
path_with_context = pull_cache[alias_name]
if not ignore_deprecated and not os.path.islink(path_with_context.path):
# FIXME: this is not always the case, some are just aliases
display.deprecated('%s is kept for backwards compatibility but usage is discouraged. ' # pylint: disable=ansible-deprecated-no-version
'The module documentation details page may explain more about this rationale.' % name.lstrip('_'))
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = alias_name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context.resolved = True
return plugin_load_context
# last ditch, if it's something that can be redirected, look for a builtin redirect before giving up
candidate_fqcr = 'ansible.builtin.{0}'.format(name)
if '.' not in name and AnsibleCollectionRef.is_valid_fqcr(candidate_fqcr):
return self._find_fq_plugin(fq_name=candidate_fqcr, extension=suffix, plugin_load_context=plugin_load_context,
ignore_deprecated=ignore_deprecated)
return plugin_load_context.nope('{0} is not eligible for last-chance resolution'.format(name))
def has_plugin(self, name, collection_list=None):
''' Checks if a plugin named name exists '''
try:
return self.find_plugin(name, collection_list=collection_list) is not None
except Exception as ex:
if isinstance(ex, AnsibleError):
raise
# log and continue, likely an innocuous type/package loading failure in collections import
display.debug('has_plugin error: {0}'.format(to_text(ex)))
__contains__ = has_plugin
def _load_module_source(self, name, path):
# avoid collisions across plugins
if name.startswith('ansible_collections.'):
full_name = name
else:
full_name = '.'.join([self.package, name])
if full_name in sys.modules:
# Avoids double loading, See https://github.com/ansible/ansible/issues/13110
return sys.modules[full_name]
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
spec = importlib.util.spec_from_file_location(to_native(full_name), to_native(path))
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
sys.modules[full_name] = module
return module
def _update_object(self, obj, name, path, redirected_names=None):
# set extra info on the module, in case we want it later
setattr(obj, '_original_path', path)
setattr(obj, '_load_name', name)
setattr(obj, '_redirected_names', redirected_names or [])
def get(self, name, *args, **kwargs):
return self.get_with_context(name, *args, **kwargs).object
def get_with_context(self, name, *args, **kwargs):
''' instantiates a plugin of the given name using arguments '''
found_in_cache = True
class_only = kwargs.pop('class_only', False)
collection_list = kwargs.pop('collection_list', None)
if name in self.aliases:
name = self.aliases[name]
plugin_load_context = self.find_plugin_with_context(name, collection_list=collection_list)
if not plugin_load_context.resolved or not plugin_load_context.plugin_resolved_path:
# FIXME: this is probably an error (eg removed plugin)
return get_with_context_result(None, plugin_load_context)
name = plugin_load_context.plugin_resolved_name
path = plugin_load_context.plugin_resolved_path
redirected_names = plugin_load_context.redirect_list or []
if path not in self._module_cache:
self._module_cache[path] = self._load_module_source(name, path)
self._load_config_defs(name, self._module_cache[path], path)
found_in_cache = False
obj = getattr(self._module_cache[path], self.class_name)
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
return get_with_context_result(None, plugin_load_context)
if not issubclass(obj, plugin_class):
return get_with_context_result(None, plugin_load_context)
# FIXME: update this to use the load context
self._display_plugin_load(self.class_name, name, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
# A plugin may need to use its _load_name in __init__ (for example, to set
# or get options from config), so update the object before using the constructor
instance = object.__new__(obj)
self._update_object(instance, name, path, redirected_names)
obj.__init__(instance, *args, **kwargs)
obj = instance
except TypeError as e:
if "abstract" in e.args[0]:
# Abstract Base Class. The found plugin file does not
# fully implement the defined interface.
return get_with_context_result(None, plugin_load_context)
raise
self._update_object(obj, name, path, redirected_names)
return get_with_context_result(obj, plugin_load_context)
def _display_plugin_load(self, class_name, name, searched_paths, path, found_in_cache=None, class_only=None):
''' formats data to display debug info for plugin loading, also avoids processing unless really needed '''
if C.DEFAULT_DEBUG:
msg = 'Loading %s \'%s\' from %s' % (class_name, os.path.basename(name), path)
if len(searched_paths) > 1:
msg = '%s (searched paths: %s)' % (msg, self.format_paths(searched_paths))
if found_in_cache or class_only:
msg = '%s (found_in_cache=%s, class_only=%s)' % (msg, found_in_cache, class_only)
display.debug(msg)
def all(self, *args, **kwargs):
'''
Iterate through all plugins of this type
A plugin loader is initialized with a specific type. This function is an iterator returning
all of the plugins of that type to the caller.
:kwarg path_only: If this is set to True, then we return the paths to where the plugins reside
instead of an instance of the plugin. This conflicts with class_only and both should
not be set.
:kwarg class_only: If this is set to True then we return the python class which implements
a plugin rather than an instance of the plugin. This conflicts with path_only and both
should not be set.
:kwarg _dedupe: By default, we only return one plugin per plugin name. Deduplication happens
in the same way as the :meth:`get` and :meth:`find_plugin` methods resolve which plugin
should take precedence. If this is set to False, then we return all of the plugins
found, including those with duplicate names. In the case of duplicates, the order in
which they are returned is the one that would take precedence first, followed by the
others in decreasing precedence order. This should only be used by subclasses which
want to manage their own deduplication of the plugins.
:*args: Any extra arguments are passed to each plugin when it is instantiated.
:**kwargs: Any extra keyword arguments are passed to each plugin when it is instantiated.
'''
# TODO: Change the signature of this method to:
# def all(return_type='instance', args=None, kwargs=None):
# if args is None: args = []
# if kwargs is None: kwargs = {}
# return_type can be instance, class, or path.
# These changes will mean that plugin parameters won't conflict with our params and
# will also make it impossible to request both a path and a class at the same time.
#
# Move _dedupe to be a class attribute, CUSTOM_DEDUPE, with subclasses for filters and
# tests setting it to True
global _PLUGIN_FILTERS
dedupe = kwargs.pop('_dedupe', True)
path_only = kwargs.pop('path_only', False)
class_only = kwargs.pop('class_only', False)
# Having both path_only and class_only is a coding bug
if path_only and class_only:
raise AnsibleError('Do not set both path_only and class_only when calling PluginLoader.all()')
all_matches = []
found_in_cache = True
for i in self._get_paths():
all_matches.extend(glob.glob(to_native(os.path.join(i, "*.py"))))
loaded_modules = set()
for path in sorted(all_matches, key=os.path.basename):
name = os.path.splitext(path)[0]
basename = os.path.basename(name)
if basename == '__init__' or basename in _PLUGIN_FILTERS[self.package]:
# either empty or ignored by the module blocklist
continue
if basename == 'base' and self.package == 'ansible.plugins.cache':
# cache has legacy 'base.py' file, which is wrapper for __init__.py
continue
if dedupe and basename in loaded_modules:
continue
loaded_modules.add(basename)
if path_only:
yield path
continue
if path not in self._module_cache:
try:
if self.subdir in ('filter_plugins', 'test_plugins'):
# filter and test plugin files can contain multiple plugins
# they must have a unique python module name to prevent them from shadowing each other
full_name = '{0}_{1}'.format(abs(hash(path)), basename)
else:
full_name = basename
module = self._load_module_source(full_name, path)
self._load_config_defs(basename, module, path)
except Exception as e:
display.warning("Skipping plugin (%s) as it seems to be invalid: %s" % (path, to_text(e)))
continue
self._module_cache[path] = module
found_in_cache = False
try:
obj = getattr(self._module_cache[path], self.class_name)
except AttributeError as e:
display.warning("Skipping plugin (%s) as it seems to be invalid: %s" % (path, to_text(e)))
continue
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
continue
if not issubclass(obj, plugin_class):
continue
self._display_plugin_load(self.class_name, basename, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
obj = obj(*args, **kwargs)
except TypeError as e:
display.warning("Skipping plugin (%s) as it seems to be incomplete: %s" % (path, to_text(e)))
self._update_object(obj, basename, path)
yield obj
class Jinja2Loader(PluginLoader):
"""
PluginLoader optimized for Jinja2 plugins
The filter and test plugins are Jinja2 plugins encapsulated inside of our plugin format.
The way the calling code is setup, we need to do a few things differently in the all() method
We can't use the base class version because of file == plugin assumptions and dedupe logic
"""
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' this is really 'find plugin file' '''
return super(Jinja2Loader, self).find_plugin(name, mod_type=mod_type, ignore_deprecated=ignore_deprecated, check_aliases=check_aliases,
collection_list=collection_list)
def get(self, name, *args, **kwargs):
if '.' in name: # NOTE: this is wrong way to detect collection, see note above for example
return super(Jinja2Loader, self).get(name, *args, **kwargs)
# Nothing is currently using this method
raise AnsibleError('No code should call "get" for Jinja2Loaders (Not implemented) for non collection use')
def all(self, *args, **kwargs):
"""
Differences with :meth:`PluginLoader.all`:
* Unlike other plugin types, file != plugin, a file can contain multiple plugins (of same type).
This is why we do not deduplicate ansible file names at this point, we mostly care about
the names of the actual jinja2 plugins which are inside of our files.
* We reverse the order of the list of files compared to other PluginLoaders. This is
because of how calling code chooses to sync the plugins from the list. It adds all the
Jinja2 plugins from one of our Ansible files into a dict. Then it adds the Jinja2
plugins from the next Ansible file, overwriting any Jinja2 plugins that had the same
name. This is an encapsulation violation (the PluginLoader should not know about what
calling code does with the data) but we're pushing the common code here. We'll fix
this in the future by moving more of the common code into this PluginLoader.
* We return a list. We could iterate the list instead but that's extra work for no gain because
the API receiving this doesn't care. It just needs an iterable
* This method will NOT fetch collection plugins, only those that would be expected under 'ansible.legacy'.
"""
# We don't deduplicate ansible file names.
# Instead, calling code deduplicates jinja2 plugin names when loading each file.
kwargs['_dedupe'] = False
# TODO: move this to initialization and extract/dedupe plugin names in loader and offset this from
# caller. It would have to cache/refresh on add_directory to reevaluate plugin list and dedupe.
        # Another option is to always prepend 'ansible.legacy' and force the collection path to
# load/find plugins, just need to check compatibility of that approach.
# This would also enable get/find_plugin for these type of plugins.
# We have to instantiate a list of all files so that we can reverse the list.
# We reverse it so that calling code will deduplicate this correctly.
files = list(super(Jinja2Loader, self).all(*args, **kwargs))
        files.reverse()
return files
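# How calling code typically consumes Jinja2Loader.all(), per the docstring
# above (a sketch; plugin.filters() is the standard FilterModule hook):
#
#   jinja2_filters = {}
#   for plugin in filter_loader.all():           # files come in reversed order
#       jinja2_filters.update(plugin.filters())  # later (higher-precedence) files win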
def _load_plugin_filter():
filters = defaultdict(frozenset)
user_set = False
if C.PLUGIN_FILTERS_CFG is None:
filter_cfg = '/etc/ansible/plugin_filters.yml'
else:
filter_cfg = C.PLUGIN_FILTERS_CFG
user_set = True
if os.path.exists(filter_cfg):
with open(filter_cfg, 'rb') as f:
try:
filter_data = from_yaml(f.read())
except Exception as e:
display.warning(u'The plugin filter file, {0} was not parsable.'
u' Skipping: {1}'.format(filter_cfg, to_text(e)))
return filters
try:
version = filter_data['filter_version']
except KeyError:
display.warning(u'The plugin filter file, {0} was invalid.'
u' Skipping.'.format(filter_cfg))
return filters
# Try to convert for people specifying version as a float instead of string
version = to_text(version)
version = version.strip()
if version == u'1.0':
# Modules and action plugins share the same blacklist since the difference between the
# two isn't visible to the users
try:
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
except TypeError:
display.warning(u'Unable to parse the plugin filter file {0} as'
u' module_blacklist is not a list.'
u' Skipping.'.format(filter_cfg))
return filters
filters['ansible.plugins.action'] = filters['ansible.modules']
else:
display.warning(u'The plugin filter file, {0} was a version not recognized by this'
u' version of Ansible. Skipping.'.format(filter_cfg))
else:
if user_set:
display.warning(u'The plugin filter file, {0} does not exist.'
u' Skipping.'.format(filter_cfg))
    # Special-case the stat module, as Ansible can run very few things if stat is blacklisted.
if 'stat' in filters['ansible.modules']:
raise AnsibleError('The stat module was specified in the module blacklist file, {0}, but'
' Ansible will not function without the stat module. Please remove stat'
' from the blacklist.'.format(to_native(filter_cfg)))
return filters
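# An illustrative version 1.0 filter file for the parser above (any module
# other than 'stat' may be blacklisted):
#
#   filter_version: '1.0'
#   module_blacklist:
#     - docker_container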
# since we don't want the actual collection loader understanding metadata, we'll do it in an event handler
def _on_collection_load_handler(collection_name, collection_path):
display.vvvv(to_text('Loading collection {0} from {1}'.format(collection_name, collection_path)))
collection_meta = _get_collection_metadata(collection_name)
try:
if not _does_collection_support_ansible_version(collection_meta.get('requires_ansible', ''), ansible_version):
mismatch_behavior = C.config.get_config_value('COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH')
message = 'Collection {0} does not support Ansible version {1}'.format(collection_name, ansible_version)
if mismatch_behavior == 'warning':
display.warning(message)
elif mismatch_behavior == 'error':
raise AnsibleCollectionUnsupportedVersionError(message)
except AnsibleError:
raise
except Exception as ex:
display.warning('Error parsing collection metadata requires_ansible value from collection {0}: {1}'.format(collection_name, ex))
def _does_collection_support_ansible_version(requirement_string, ansible_version):
if not requirement_string:
return True
if not SpecifierSet:
display.warning('packaging Python module unavailable; unable to validate collection Ansible version requirements')
return True
ss = SpecifierSet(requirement_string)
# ignore prerelease/postrelease/beta/dev flags for simplicity
base_ansible_version = Version(ansible_version).base_version
return ss.contains(base_ansible_version)
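# e.g. _does_collection_support_ansible_version('>=2.9.10,<2.13', '2.12.5') is True,
# while _does_collection_support_ansible_version('>=2.13', '2.12.5') is False;
# prerelease/dev tags on ansible_version are ignored via base_version.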
def _configure_collection_loader():
if AnsibleCollectionConfig.collection_finder:
# this must be a Python warning so that it can be filtered out by the import sanity test
warnings.warn('AnsibleCollectionFinder has already been configured')
return
finder = _AnsibleCollectionFinder(C.COLLECTIONS_PATHS, C.COLLECTIONS_SCAN_SYS_PATH)
finder._install()
# this should succeed now
AnsibleCollectionConfig.on_collection_load += _on_collection_load_handler
# TODO: All of the following is initialization code It should be moved inside of an initialization
# function which is called at some point early in the ansible and ansible-playbook CLI startup.
_PLUGIN_FILTERS = _load_plugin_filter()
_configure_collection_loader()
# doc fragments first
fragment_loader = PluginLoader(
'ModuleDocFragment',
'ansible.plugins.doc_fragments',
C.DOC_FRAGMENT_PLUGIN_PATH,
'doc_fragments',
)
action_loader = PluginLoader(
'ActionModule',
'ansible.plugins.action',
C.DEFAULT_ACTION_PLUGIN_PATH,
'action_plugins',
required_base_class='ActionBase',
)
cache_loader = PluginLoader(
'CacheModule',
'ansible.plugins.cache',
C.DEFAULT_CACHE_PLUGIN_PATH,
'cache_plugins',
)
callback_loader = PluginLoader(
'CallbackModule',
'ansible.plugins.callback',
C.DEFAULT_CALLBACK_PLUGIN_PATH,
'callback_plugins',
)
connection_loader = PluginLoader(
'Connection',
'ansible.plugins.connection',
C.DEFAULT_CONNECTION_PLUGIN_PATH,
'connection_plugins',
aliases={'paramiko': 'paramiko_ssh'},
required_base_class='ConnectionBase',
)
shell_loader = PluginLoader(
'ShellModule',
'ansible.plugins.shell',
'shell_plugins',
'shell_plugins',
)
module_loader = PluginLoader(
'',
'ansible.modules',
C.DEFAULT_MODULE_PATH,
'library',
)
module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
# NB: dedicated loader is currently necessary because PS module_utils expects "with subdir" lookup where
# regular module_utils doesn't. This can be revisited once we have more granular loaders.
ps_module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
lookup_loader = PluginLoader(
'LookupModule',
'ansible.plugins.lookup',
C.DEFAULT_LOOKUP_PLUGIN_PATH,
'lookup_plugins',
required_base_class='LookupBase',
)
filter_loader = Jinja2Loader(
'FilterModule',
'ansible.plugins.filter',
C.DEFAULT_FILTER_PLUGIN_PATH,
'filter_plugins',
)
test_loader = Jinja2Loader(
'TestModule',
'ansible.plugins.test',
C.DEFAULT_TEST_PLUGIN_PATH,
'test_plugins'
)
strategy_loader = PluginLoader(
'StrategyModule',
'ansible.plugins.strategy',
C.DEFAULT_STRATEGY_PLUGIN_PATH,
'strategy_plugins',
required_base_class='StrategyBase',
)
terminal_loader = PluginLoader(
'TerminalModule',
'ansible.plugins.terminal',
C.DEFAULT_TERMINAL_PLUGIN_PATH,
'terminal_plugins',
required_base_class='TerminalBase'
)
vars_loader = PluginLoader(
'VarsModule',
'ansible.plugins.vars',
C.DEFAULT_VARS_PLUGIN_PATH,
'vars_plugins',
)
cliconf_loader = PluginLoader(
'Cliconf',
'ansible.plugins.cliconf',
C.DEFAULT_CLICONF_PLUGIN_PATH,
'cliconf_plugins',
required_base_class='CliconfBase'
)
netconf_loader = PluginLoader(
'Netconf',
'ansible.plugins.netconf',
C.DEFAULT_NETCONF_PLUGIN_PATH,
'netconf_plugins',
required_base_class='NetconfBase'
)
inventory_loader = PluginLoader(
'InventoryModule',
'ansible.plugins.inventory',
C.DEFAULT_INVENTORY_PLUGIN_PATH,
'inventory_plugins'
)
httpapi_loader = PluginLoader(
'HttpApi',
'ansible.plugins.httpapi',
C.DEFAULT_HTTPAPI_PLUGIN_PATH,
'httpapi_plugins',
required_base_class='HttpApiBase',
)
become_loader = PluginLoader(
'BecomeModule',
'ansible.plugins.become',
C.BECOME_PLUGIN_PATH,
'become_plugins'
)
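# Illustrative tie-in to the issue above (an assumption, not a verified repro):
# resolving the FQCN goes through find_plugin_with_context, and the resolved
# python name matches the one in the error message.
#
#   ctx = connection_loader.find_plugin_with_context('ansible.builtin.ssh')
#   ctx.resolved_fqcn         # 'ansible.builtin.ssh'
#   ctx.plugin_resolved_name  # 'ansible_collections.ansible.builtin.plugins.connection.ssh'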
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,669 |
Using core connection plugin's FQCN will crash `ansible-playbook`
|
### Summary
When running `ansible-playbook` with a connection plugin from the `ansible.builtin` collection referenced by its FQCN, the whole process crashes.
The crash happens during any configuration lookup via `AnsiblePlugin.get_option()` from said plugin, with the following exception:
> Requested entry (plugin_type: connection plugin: ansible_collections.ansible.builtin.plugins.connection.ssh setting: host ) was not defined in configuration
It doesn't matter whether the connection plugin is specified via the `--connection` argument, the `connection:` keyword, or the `ansible_connection` variable.
However, ad-hoc commands and the Ansible console do work with the FQCN (which is what the docs recommend).
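To make the failing lookup concrete, here is a minimal sketch built from the exception above and the traceback in *Actual Results*; the short-name call and the registration timing are assumptions on my part, not verified internals:
```python
# Minimal sketch of the config lookup that crashes (illustrative only).
# Assumes the ssh plugin's option definitions have already been registered
# by the plugin loader (they are pushed to the config manager on plugin load).
from ansible import constants as C

# Works when the plugin is referenced by its short name:
C.config.get_config_value('host', plugin_type='connection', plugin_name='ssh')

# Raises "Requested entry ... was not defined in configuration." when the
# plugin was resolved via its FQCN and _load_name became the full python path:
C.config.get_config_value(
    'host',
    plugin_type='connection',
    plugin_name='ansible_collections.ansible.builtin.plugins.connection.ssh',
)
```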
---
**PS.** I'm willing to investigate this further and possibly even make a PR if someone with internal knowledge would point me in the right direction.
### Issue Type
Bug Report
### Component Name
ansible-playbook
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/bin/ansible
python version = 3.10.4 (main, Mar 23 2022, 23:05:40) [GCC 11.2.0]
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No custom config, all defaults
```
### OS / Environment
- Arch Linux (kernel version `5.17.4-arch1-1`)
- Python 3.10.4
- `ansible-core` installed via pip (inside fresh venv)
### Steps to Reproduce
Example playbook using `connection` keyword:
```yaml
- hosts: all # Do not use the implicit `localhost`, as the `local` plugin won't perform a configuration lookup in the default configuration
connection: ansible.builtin.ssh
tasks:
- ping:
```
Quick and dirty one-liner using `--connection` CLI argument instead:
**PS!** Your shell must support process substitution for the inline playbook (i.e. bash or zsh)
```bash
ansible-playbook -i localhost, --connection ansible.builtin.ssh <(echo "[{'hosts':'all','tasks':[{'action':'ping'}]}]")
```
> Note: Although it fails at fact gathering, that is not the cause; it will fail without fact gathering as well
### Expected Results
Expected the command not to crash during the connection plugin's config lookup
### Actual Results
```console
$ ansible-playbook -i localhost, --connection ansible.builtin.ssh -vvvv <(echo "[{'hosts':'all','tasks':[{'action':'ping'}]}]")
/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/paramiko/transport.py:236: CryptographyDeprecationWarning: Blowfish has been deprecated
"class": algorithms.Blowfish,
ansible-playbook [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/bin/ansible-playbook
python version = 3.10.4 (main, Mar 23 2022, 23:05:40) [GCC 11.2.0]
jinja version = 3.1.1
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
Loading callback plugin default of type stdout, v2.0 from /home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: 13 *********************************************************************************************************************************************************************************************************************************
Positional arguments: /proc/self/fd/13
verbosity: 4
connection: ansible.builtin.ssh
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('localhost,',)
forks: 5
1 plays in /proc/self/fd/13
PLAY [all] ***********************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************
task path: /proc/self/fd/13:1
The full traceback is:
Traceback (most recent call last):
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/config/manager.py", line 425, in get_config_value
value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name,
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/config/manager.py", line 558, in get_config_value_and_origin
raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: Requested entry (plugin_type: connection plugin: ansible_collections.ansible.builtin.plugins.connection.ssh setting: host ) was not defined in configuration.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 600, in _execute
result = self._handler.run(task_vars=variables)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/gather_facts.py", line 100, in run
res = self._execute_module(module_name=fact_module, module_args=mod_args, task_vars=task_vars, wrap_async=False)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 973, in _execute_module
self._make_tmp_path()
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 390, in _make_tmp_path
tmpdir = self._remote_expand_user(self.get_shell_option('remote_tmp', default='~/.ansible/tmp'), sudoable=False)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 856, in _remote_expand_user
data = self._low_level_execute_command(cmd, sudoable=False)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 1253, in _low_level_execute_command
rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/connection/ssh.py", line 1276, in exec_command
self.host = self.get_option('host') or self._play_context.remote_addr
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'Requested entry (plugin_type: connection plugin: ansible_collections.ansible.builtin.plugins.connection.ssh setting: host ) was not defined in configuration.'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77669
|
https://github.com/ansible/ansible/pull/77659
|
fb2f080b42c16e28262ba6b9c295332e09e2bfad
|
a3cc6a581ef191faf1c9ec102d1c116c4c2a8778
| 2022-04-27T22:32:29Z |
python
| 2022-04-28T19:14:33Z |
test/integration/targets/plugin_loader/runme.sh
|
#!/usr/bin/env bash
set -ux
cleanup() {
unlink normal/library/_symlink.py
}
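# The '_symlink.py -> _underscore.py' symlink created below exercises the
# plugin loader's handling of '_'-prefixed (deprecated/alias) plugin files;
# this reading is inferred from the naming convention used in this target.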
pushd normal/library
ln -s _underscore.py _symlink.py
popd
trap 'cleanup' EXIT
# check normal execution
for myplay in normal/*.yml
do
ansible-playbook "${myplay}" -i ../../inventory -vvv "$@"
if test $? != 0 ; then
echo "### Failed to run ${myplay} normally"
exit 1
fi
done
# check overrides
for myplay in override/*.yml
do
ansible-playbook "${myplay}" -i ../../inventory -vvv "$@"
if test $? != 0 ; then
echo "### Failed to run ${myplay} override"
exit 1
fi
done
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,669 |
Using core connection plugin's FQCN will crash `ansible-playbook`
|
### Summary
When running `ansible-playbook` with a connection plugin from the `ansible.builtin` collection referenced by its FQCN, the whole process crashes.
The crash happens during any configuration lookup via `AnsiblePlugin.get_option()` from said plugin, with the following exception:
> Requested entry (plugin_type: connection plugin: ansible_collections.ansible.builtin.plugins.connection.ssh setting: host ) was not defined in configuration
It doesn't matter whether the connection plugin is specified via the `--connection` argument, the `connection:` keyword, or the `ansible_connection` variable.
However, ad-hoc commands and ansible-console do work with the FQCN (which is recommended in the docs).
---
**PS.** I'm willing to investigate this further and possibly even make a PR if someone with internal knowledge would point me in the right direction.
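A minimal sketch that should exercise the same lookup path directly, outside any playbook (the call signature is taken from the traceback below; the idea that the config definitions are keyed by the loader's short name is an assumption to verify, not a confirmed diagnosis):

```python
# Sketch of the failing lookup path, outside any playbook.
# Assumptions: loading the plugin registers its config definitions under the
# load name, and the FQCN resolves to the long python module path seen in the
# traceback.
from ansible import constants as C
from ansible.plugins.loader import connection_loader

connection_loader.get('ssh', class_only=True)  # registers defs keyed 'ssh'

# Works: definitions exist under the short name
print(C.config.get_config_value_and_origin('host', plugin_type='connection',
                                           plugin_name='ssh'))

# Expected to raise AnsibleError('Requested entry ... was not defined in configuration.')
print(C.config.get_config_value_and_origin(
    'host', plugin_type='connection',
    plugin_name='ansible_collections.ansible.builtin.plugins.connection.ssh'))
```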
### Issue Type
Bug Report
### Component Name
ansible-playbook
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/bin/ansible
python version = 3.10.4 (main, Mar 23 2022, 23:05:40) [GCC 11.2.0]
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No custom config, all defaults
```
### OS / Environment
- Arch Linux (kernel version `5.17.4-arch1-1`)
- Python 3.10.4
- `ansible-core` installed via pip (inside fresh venv)
### Steps to Reproduce
Example playbook using `connection` keyword:
```yaml
- hosts: all # Do not use the implicit `localhost`, as the `local` plugin won't perform a configuration lookup in the default configuration
connection: ansible.builtin.ssh
tasks:
- ping:
```
A quick-and-dirty one-liner using the `--connection` CLI argument instead:
**PS!** The shell must support process substitution for the inline playbook (i.e. bash or zsh)
```bash
ansible-playbook -i localhost, --connection ansible.builtin.ssh <(echo "[{'hosts':'all','tasks':[{'action':'ping'}]}]")
```
> Note: Although it fails at fact gathering, that is not a factor - it fails without fact gathering as well
### Expected Results
Expected the command not to crash during the connection plugin's config lookup
### Actual Results
```console
$ ansible-playbook -i localhost, --connection ansible.builtin.ssh -vvvv <(echo "[{'hosts':'all','tasks':[{'action':'ping'}]}]")
/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/paramiko/transport.py:236: CryptographyDeprecationWarning: Blowfish has been deprecated
"class": algorithms.Blowfish,
ansible-playbook [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-playbook-fqcn-issue/venv/bin/ansible-playbook
python version = 3.10.4 (main, Mar 23 2022, 23:05:40) [GCC 11.2.0]
jinja version = 3.1.1
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
Loading callback plugin default of type stdout, v2.0 from /home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: 13 *********************************************************************************************************************************************************************************************************************************
Positional arguments: /proc/self/fd/13
verbosity: 4
connection: ansible.builtin.ssh
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('localhost,',)
forks: 5
1 plays in /proc/self/fd/13
PLAY [all] ***********************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************
task path: /proc/self/fd/13:1
The full traceback is:
Traceback (most recent call last):
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/__init__.py", line 58, in get_option
option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/config/manager.py", line 425, in get_config_value
value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name,
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/config/manager.py", line 558, in get_config_value_and_origin
raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config)))
ansible.errors.AnsibleError: Requested entry (plugin_type: connection plugin: ansible_collections.ansible.builtin.plugins.connection.ssh setting: host ) was not defined in configuration.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/executor/task_executor.py", line 600, in _execute
result = self._handler.run(task_vars=variables)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/gather_facts.py", line 100, in run
res = self._execute_module(module_name=fact_module, module_args=mod_args, task_vars=task_vars, wrap_async=False)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 973, in _execute_module
self._make_tmp_path()
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 390, in _make_tmp_path
tmpdir = self._remote_expand_user(self.get_shell_option('remote_tmp', default='~/.ansible/tmp'), sudoable=False)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 856, in _remote_expand_user
data = self._low_level_execute_command(cmd, sudoable=False)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/action/__init__.py", line 1253, in _low_level_execute_command
rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/connection/ssh.py", line 1276, in exec_command
self.host = self.get_option('host') or self._play_context.remote_addr
File "/home/kristian/playground/ansible-playbook-fqcn-issue/venv/lib/python3.10/site-packages/ansible/plugins/__init__.py", line 60, in get_option
raise KeyError(to_native(e))
KeyError: 'Requested entry (plugin_type: connection plugin: ansible_collections.ansible.builtin.plugins.connection.ssh setting: host ) was not defined in configuration.'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77669
|
https://github.com/ansible/ansible/pull/77659
|
fb2f080b42c16e28262ba6b9c295332e09e2bfad
|
a3cc6a581ef191faf1c9ec102d1c116c4c2a8778
| 2022-04-27T22:32:29Z |
python
| 2022-04-28T19:14:33Z |
test/integration/targets/plugin_loader/use_coll_name.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,644 |
ansible-config does not dump config in parseable format
|
##### SUMMARY
While `ansible-config dump` is a very useful tool for determining the configuration used by Ansible at runtime, it does have a major flaw: its output is not machine-parseable.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible-config
##### ADDITIONAL INFORMATION
ansible-config should have an option to dump config in a machine-parseable format like JSON or YAML, so it can be consumed by other tools.
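A sketch of what the interface could look like (the `--format` flag below is an illustrative assumption, not an existing option at the time of this report):

```console
$ ansible-config dump --format json > settings.json
$ ansible-config dump --format yaml --only-changed
```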
|
https://github.com/ansible/ansible/issues/73644
|
https://github.com/ansible/ansible/pull/77447
|
a3cc6a581ef191faf1c9ec102d1c116c4c2a8778
|
a12e0a0e874c6c0d18a1a2d83dcb106d207136af
| 2021-02-18T10:28:01Z |
python
| 2022-04-28T23:58:58Z |
changelogs/fragments/config_formats.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,644 |
ansible-config does not dump config in parseable format
|
##### SUMMARY
While `ansible-config dump` is a very useful tool for determining the configuration used by Ansible at runtime, it does have a major flaw: its output is not machine-parseable.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible-config
##### ADDITIONAL INFORMATION
ansible-config should have an option to dump config in a machine-parseable format like JSON or YAML, so it can be consumed by other tools.
|
https://github.com/ansible/ansible/issues/73644
|
https://github.com/ansible/ansible/pull/77447
|
a3cc6a581ef191faf1c9ec102d1c116c4c2a8778
|
a12e0a0e874c6c0d18a1a2d83dcb106d207136af
| 2021-02-18T10:28:01Z |
python
| 2022-04-28T23:58:58Z |
lib/ansible/cli/config.py
|
#!/usr/bin/env python
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import os
import shlex
import subprocess
from collections.abc import Mapping
import yaml
from ansible import context
import ansible.plugins.loader as plugin_loader
from ansible import constants as C
from ansible.cli.arguments import option_helpers as opt_help
from ansible.config.manager import ConfigManager, Setting
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils._text import to_native, to_text, to_bytes
from ansible.module_utils.six import string_types
from ansible.parsing.quoting import is_quoted
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.utils.color import stringc
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath
display = Display()
def get_constants():
''' helper method to ensure we can template based on existing constants '''
if not hasattr(get_constants, 'cvars'):
get_constants.cvars = {k: getattr(C, k) for k in dir(C) if not k.startswith('__')}
return get_constants.cvars
class ConfigCLI(CLI):
""" Config command line class """
name = 'ansible-config'
def __init__(self, args, callback=None):
self.config_file = None
self.config = None
super(ConfigCLI, self).__init__(args, callback)
def init_parser(self):
super(ConfigCLI, self).init_parser(
desc="View ansible configuration.",
)
common = opt_help.argparse.ArgumentParser(add_help=False)
opt_help.add_verbosity_options(common)
common.add_argument('-c', '--config', dest='config_file',
help="path to configuration file, defaults to first file found in precedence.")
common.add_argument("-t", "--type", action="store", default='base', dest='type', choices=['all', 'base'] + list(C.CONFIGURABLE_PLUGINS),
help="Filter down to a specific plugin type.")
common.add_argument('args', help='Specific plugin to target, requires type of plugin to be set', nargs='*')
subparsers = self.parser.add_subparsers(dest='action')
subparsers.required = True
list_parser = subparsers.add_parser('list', help='Print all config options', parents=[common])
list_parser.set_defaults(func=self.execute_list)
dump_parser = subparsers.add_parser('dump', help='Dump configuration', parents=[common])
dump_parser.set_defaults(func=self.execute_dump)
dump_parser.add_argument('--only-changed', '--changed-only', dest='only_changed', action='store_true',
help="Only show configurations that have changed from the default")
view_parser = subparsers.add_parser('view', help='View configuration file', parents=[common])
view_parser.set_defaults(func=self.execute_view)
init_parser = subparsers.add_parser('init', help='Create initial configuration', parents=[common])
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--format', '-f', dest='format', action='store', choices=['ini', 'env', 'vars'], default='ini',
help='Output format for init')
init_parser.add_argument('--disabled', dest='commented', action='store_true', default=False,
help='Prefixes all entries with a comment character to disable them')
# search_parser = subparsers.add_parser('find', help='Search configuration')
# search_parser.set_defaults(func=self.execute_search)
# search_parser.add_argument('args', help='Search term', metavar='<search term>')
def post_process_args(self, options):
options = super(ConfigCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(ConfigCLI, self).run()
if context.CLIARGS['config_file']:
self.config_file = unfrackpath(context.CLIARGS['config_file'], follow=False)
b_config = to_bytes(self.config_file)
if os.path.exists(b_config) and os.access(b_config, os.R_OK):
self.config = ConfigManager(self.config_file)
else:
raise AnsibleOptionsError('The provided configuration file is missing or not accessible: %s' % to_native(self.config_file))
else:
self.config = C.config
self.config_file = self.config._config_file
if self.config_file:
try:
if not os.path.exists(self.config_file):
raise AnsibleOptionsError("%s does not exist or is not accessible" % (self.config_file))
elif not os.path.isfile(self.config_file):
raise AnsibleOptionsError("%s is not a valid file" % (self.config_file))
os.environ['ANSIBLE_CONFIG'] = to_native(self.config_file)
except Exception:
if context.CLIARGS['action'] in ['view']:
raise
elif context.CLIARGS['action'] in ['edit', 'update']:
display.warning("File does not exist, used empty file: %s" % self.config_file)
elif context.CLIARGS['action'] == 'view':
raise AnsibleError('Invalid or no config file was supplied')
# run the requested action
context.CLIARGS['func']()
def execute_update(self):
'''
Updates a single setting in the specified ansible.cfg
'''
raise AnsibleError("Option not implemented yet")
# pylint: disable=unreachable
if context.CLIARGS['setting'] is None:
raise AnsibleOptionsError("update option requires a setting to update")
(entry, value) = context.CLIARGS['setting'].split('=')
if '.' in entry:
(section, option) = entry.split('.')
else:
section = 'defaults'
option = entry
subprocess.call([
'ansible',
'-m', 'ini_file',
'localhost',
'-c', 'local',
'-a', '"dest=%s section=%s option=%s value=%s backup=yes"' % (self.config_file, section, option, value)
])
def execute_view(self):
'''
Displays the current config file
'''
try:
with open(self.config_file, 'rb') as f:
self.pager(to_text(f.read(), errors='surrogate_or_strict'))
except Exception as e:
raise AnsibleError("Failed to open config file: %s" % to_native(e))
def execute_edit(self):
'''
Opens ansible.cfg in the default EDITOR
'''
raise AnsibleError("Option not implemented yet")
# pylint: disable=unreachable
try:
editor = shlex.split(os.environ.get('EDITOR', 'vi'))
editor.append(self.config_file)
subprocess.call(editor)
except Exception as e:
raise AnsibleError("Failed to open editor: %s" % to_native(e))
def _list_plugin_settings(self, ptype, plugins=None):
entries = {}
loader = getattr(plugin_loader, '%s_loader' % ptype)
# build list
if plugins:
plugin_cs = []
for plugin in plugins:
p = loader.get(plugin, class_only=True)
if p is None:
display.warning("Skipping %s as we could not find matching plugin" % plugin)
else:
plugin_cs.append(p)
else:
plugin_cs = loader.all(class_only=True)
# iterate over class instances
for plugin in plugin_cs:
finalname = name = plugin._load_name
if name.startswith('_'):
# alias or deprecated
if os.path.islink(plugin._original_path):
continue
else:
finalname = name.replace('_', '', 1) + ' (DEPRECATED)'
entries[finalname] = self.config.get_configuration_definitions(ptype, name)
return entries
def _list_entries_from_args(self):
'''
build a dict with the list requested configs
'''
config_entries = {}
if context.CLIARGS['type'] in ('base', 'all'):
# this dumps main/common configs
config_entries = self.config.get_configuration_definitions(ignore_private=True)
if context.CLIARGS['type'] != 'base':
config_entries['PLUGINS'] = {}
if context.CLIARGS['type'] == 'all':
# now each plugin type
for ptype in C.CONFIGURABLE_PLUGINS:
config_entries['PLUGINS'][ptype.upper()] = self._list_plugin_settings(ptype)
elif context.CLIARGS['type'] != 'base':
config_entries['PLUGINS'][context.CLIARGS['type']] = self._list_plugin_settings(context.CLIARGS['type'], context.CLIARGS['args'])
return config_entries
def execute_list(self):
'''
list and output available configs
'''
config_entries = self._list_entries_from_args()
self.pager(to_text(yaml.dump(config_entries, Dumper=AnsibleDumper), errors='surrogate_or_strict'))
def _get_settings_vars(self, settings, subkey):
data = []
if context.CLIARGS['commented']:
prefix = '#'
else:
prefix = ''
for setting in settings:
if not settings[setting].get('description'):
continue
default = settings[setting].get('default', '')
if subkey == 'env':
stype = settings[setting].get('type', '')
if stype == 'boolean':
if default:
default = '1'
else:
default = '0'
elif default:
if stype == 'list':
if not isinstance(default, string_types):
# python lists are not valid env ones
try:
default = ', '.join(default)
except Exception as e:
# list of other stuff
default = '%s' % to_native(default)
if isinstance(default, string_types) and not is_quoted(default):
default = shlex.quote(default)
elif default is None:
default = ''
if subkey in settings[setting] and settings[setting][subkey]:
entry = settings[setting][subkey][-1]['name']
if isinstance(settings[setting]['description'], string_types):
desc = settings[setting]['description']
else:
desc = '\n#'.join(settings[setting]['description'])
name = settings[setting].get('name', setting)
data.append('# %s(%s): %s' % (name, settings[setting].get('type', 'string'), desc))
# TODO: might need quoting and value coercion depending on type
if subkey == 'env':
data.append('%s%s=%s' % (prefix, entry, default))
elif subkey == 'vars':
data.append(prefix + to_text(yaml.dump({entry: default}, Dumper=AnsibleDumper, default_flow_style=False), errors='surrogate_or_strict'))
data.append('')
return data
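# Illustrative output for subkey='env' (setting name and text are made up):
#   # FORCE_COLOR(boolean): Toggle colorized output
#   ANSIBLE_FORCE_COLOR=0
# With --disabled, the value line gains a '#' prefix so it is commented out.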
def _get_settings_ini(self, settings):
sections = {}
for o in sorted(settings.keys()):
opt = settings[o]
if not isinstance(opt, Mapping):
# recursed into one of the few settings that is a mapping, now hitting its strings
continue
if not opt.get('description'):
# it's a plugin
new_sections = self._get_settings_ini(opt)
for s in new_sections:
if s in sections:
sections[s].extend(new_sections[s])
else:
sections[s] = new_sections[s]
continue
if isinstance(opt['description'], string_types):
desc = '# (%s) %s' % (opt.get('type', 'string'), opt['description'])
else:
desc = "# (%s) " % opt.get('type', 'string')
desc += "\n# ".join(opt['description'])
if 'ini' in opt and opt['ini']:
entry = opt['ini'][-1]
if entry['section'] not in sections:
sections[entry['section']] = []
default = opt.get('default', '')
if opt.get('type', '') == 'list' and not isinstance(default, string_types):
# python lists are not valid ini ones
default = ', '.join(default)
elif default is None:
default = ''
if context.CLIARGS['commented']:
entry['key'] = ';%s' % entry['key']
key = desc + '\n%s=%s' % (entry['key'], default)
sections[entry['section']].append(key)
return sections
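# Illustrative shape of the returned mapping (content is made up):
#   {'defaults': ["# (boolean) Toggle colorized output\nforce_color=False"]}
# i.e. one entry per ini key, already rendered as 'description\nkey=default'
# (with a ';' prefix on the key when --disabled is set), grouped by ini
# section; execute_init() prints each group under a '[section]' header.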
def execute_init(self):
data = []
config_entries = self._list_entries_from_args()
plugin_types = config_entries.pop('PLUGINS', None)
if context.CLIARGS['format'] == 'ini':
sections = self._get_settings_ini(config_entries)
if plugin_types:
for ptype in plugin_types:
plugin_sections = self._get_settings_ini(plugin_types[ptype])
for s in plugin_sections:
if s in sections:
sections[s].extend(plugin_sections[s])
else:
sections[s] = plugin_sections[s]
if sections:
for section in sections.keys():
data.append('[%s]' % section)
for key in sections[section]:
data.append(key)
data.append('')
data.append('')
elif context.CLIARGS['format'] in ('env', 'vars'): # TODO: add yaml once that config option is added
data = self._get_settings_vars(config_entries, context.CLIARGS['format'])
if plugin_types:
for ptype in plugin_types:
for plugin in plugin_types[ptype].keys():
data.extend(self._get_settings_vars(plugin_types[ptype][plugin], context.CLIARGS['format']))
self.pager(to_text('\n'.join(data), errors='surrogate_or_strict'))
def _render_settings(self, config):
text = []
for setting in sorted(config):
changed = False
if isinstance(config[setting], Setting):
# proceed normally
if config[setting].origin == 'default':
color = 'green'
elif config[setting].origin == 'REQUIRED':
# should include '_terms', '_input', etc
color = 'red'
else:
color = 'yellow'
changed = True
msg = "%s(%s) = %s" % (setting, config[setting].origin, config[setting].value)
else:
color = 'green'
msg = "%s(%s) = %s" % (setting, 'default', config[setting].get('default'))
if not context.CLIARGS['only_changed'] or changed:
text.append(stringc(msg, color))
return text
def _get_global_configs(self):
config = self.config.get_configuration_definitions(ignore_private=True).copy()
for setting in config.keys():
v, o = C.config.get_config_value_and_origin(setting, cfile=self.config_file, variables=get_constants())
config[setting] = Setting(setting, v, o, None)
return self._render_settings(config)
def _get_plugin_configs(self, ptype, plugins):
# prep loading
loader = getattr(plugin_loader, '%s_loader' % ptype)
# accumulators
text = []
config_entries = {}
# build list
if plugins:
plugin_cs = []
for plugin in plugins:
p = loader.get(plugin, class_only=True)
if p is None:
display.warning("Skipping %s as we could not find matching plugin" % plugin)
else:
plugin_cs.append(p)
else:
plugin_cs = loader.all(class_only=True)
for plugin in plugin_cs:
# in case of deprecation they diverge
finalname = name = plugin._load_name
if name.startswith('_'):
if os.path.islink(plugin._original_path):
# skip alias
continue
# deprecated, but use 'nice name'
finalname = name.replace('_', '', 1) + ' (DEPRECATED)'
# default entries per plugin
config_entries[finalname] = self.config.get_configuration_definitions(ptype, name)
try:
# populate config entries by loading plugin
dump = loader.get(name, class_only=True)
except Exception as e:
display.warning('Skipping "%s" %s plugin, as we cannot load plugin to check config due to: %s' % (name, ptype, to_native(e)))
continue
# actually get the values
for setting in config_entries[finalname].keys():
try:
v, o = C.config.get_config_value_and_origin(setting, cfile=self.config_file, plugin_type=ptype, plugin_name=name, variables=get_constants())
except AnsibleError as e:
if to_text(e).startswith('No setting was provided for required configuration'):
v = None
o = 'REQUIRED'
else:
raise e
if v is None and o is None:
# not all cases will be an error
o = 'REQUIRED'
config_entries[finalname][setting] = Setting(setting, v, o, None)
# pretty please!
results = self._render_settings(config_entries[finalname])
if results:
# avoid header for empty lists (only changed!)
text.append('\n%s:\n%s' % (finalname, '_' * len(finalname)))
text.extend(results)
return text
def execute_dump(self):
'''
Shows the current settings, merges ansible.cfg if specified
'''
if context.CLIARGS['type'] == 'base':
# deal with base
text = self._get_global_configs()
elif context.CLIARGS['type'] == 'all':
# deal with base
text = self._get_global_configs()
# deal with plugins
for ptype in C.CONFIGURABLE_PLUGINS:
text.append('\n%s:\n%s' % (ptype.upper(), '=' * len(ptype)))
text.extend(self._get_plugin_configs(ptype, context.CLIARGS['args']))
else:
# deal with plugins
text = self._get_plugin_configs(context.CLIARGS['type'], context.CLIARGS['args'])
self.pager(to_text('\n'.join(text), errors='surrogate_or_strict'))
def main(args=None):
ConfigCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,644 |
ansible-config does not dump config in parseable format
|
##### SUMMARY
While `ansible-config dump` is a very useful tool for determining the configuration used by Ansible at runtime, it does have a major flaw: its output is not machine parseable.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible-config
##### ADDITIONAL INFORMATION
ansible-config should have an option to dump config in a machine parseable format like JSON or YAML, so it can be consumed by other tools.
|
https://github.com/ansible/ansible/issues/73644
|
https://github.com/ansible/ansible/pull/77447
|
a3cc6a581ef191faf1c9ec102d1c116c4c2a8778
|
a12e0a0e874c6c0d18a1a2d83dcb106d207136af
| 2021-02-18T10:28:01Z |
python
| 2022-04-28T23:58:58Z |
lib/ansible/cli/doc.py
|
#!/usr/bin/env python
# Copyright: (c) 2014, James Tanner <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import json
import pkgutil
import os
import os.path
import re
import textwrap
import traceback
import ansible.plugins.loader as plugin_loader
from collections.abc import Sequence
from pathlib import Path
from ansible import constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.collections.list import list_collection_dirs
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common.json import AnsibleJSONEncoder
from ansible.module_utils.common.yaml import yaml_dump
from ansible.module_utils.compat import importlib
from ansible.module_utils.six import string_types
from ansible.parsing.plugin_docs import read_docstub
from ansible.parsing.utils.yaml import from_yaml
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.plugins.list import list_plugins
from ansible.plugins.loader import action_loader, fragment_loader
from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path
from ansible.utils.display import Display
from ansible.utils.plugin_docs import (
get_docstring,
get_versioned_doclink,
)
display = Display()
TARGET_OPTIONS = C.DOCUMENTABLE_PLUGINS + ('role', 'keyword',)
PB_OBJECTS = ['Play', 'Role', 'Block', 'Task']
PB_LOADED = {}
SNIPPETS = ['inventory', 'lookup', 'module']
def add_collection_plugins(plugin_list, plugin_type, coll_filter=None):
display.deprecated("add_collection_plugins method, use ansible.plugins.list functions instead.", version='2.17')
plugin_list.update(list_plugins(plugin_type, coll_filter))
def jdump(text):
try:
display.display(json.dumps(text, cls=AnsibleJSONEncoder, sort_keys=True, indent=4))
except TypeError as e:
display.vvv(traceback.format_exc())
raise AnsibleError('We could not convert all the documentation into JSON as there was a conversion issue: %s' % to_native(e))
class PluginNotFound(Exception):
pass
class RoleMixin(object):
"""A mixin containing all methods relevant to role argument specification functionality.
Note: The methods for actual display of role data are not present here.
"""
# Potential locations of the role arg spec file in the meta subdir, with main.yml
# having the lowest priority.
ROLE_ARGSPEC_FILES = ['argument_specs' + e for e in C.YAML_FILENAME_EXTENSIONS] + ["main" + e for e in C.YAML_FILENAME_EXTENSIONS]
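# With the stock C.YAML_FILENAME_EXTENSIONS this presumably expands to
# ['argument_specs.yml', 'argument_specs.yaml', 'argument_specs.json',
#  'main.yml', 'main.yaml', 'main.json']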
def _load_argspec(self, role_name, collection_path=None, role_path=None):
"""Load the role argument spec data from the source file.
:param str role_name: The name of the role for which we want the argspec data.
:param str collection_path: Path to the collection containing the role. This
will be None for standard roles.
:param str role_path: Path to the standard role. This will be None for
collection roles.
We support two files containing the role arg spec data: either meta/main.yml
or meta/argument_specs.yml. The argument_specs.yml file will take precedence
over the meta/main.yml file, if it exists. Data is NOT combined between the
two files.
:returns: A dict of all data underneath the ``argument_specs`` top-level YAML
key in the argspec data file. Empty dict is returned if there is no data.
"""
if collection_path:
meta_path = os.path.join(collection_path, 'roles', role_name, 'meta')
elif role_path:
meta_path = os.path.join(role_path, 'meta')
else:
raise AnsibleError("A path is required to load argument specs for role '%s'" % role_name)
path = None
# Check all potential spec files
for specfile in self.ROLE_ARGSPEC_FILES:
full_path = os.path.join(meta_path, specfile)
if os.path.exists(full_path):
path = full_path
break
if path is None:
return {}
try:
with open(path, 'r') as f:
data = from_yaml(f.read(), file_name=path)
if data is None:
data = {}
return data.get('argument_specs', {})
except (IOError, OSError) as e:
raise AnsibleParserError("An error occurred while trying to read the file '%s': %s" % (path, to_native(e)), orig_exc=e)
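# Illustrative meta/argument_specs.yml that _load_argspec() would consume
# (file contents are a made-up example):
#
#   argument_specs:
#     main:
#       short_description: Configure the web server
#       options:
#         port:
#           type: int
#           default: 8080
#
# The method returns everything under 'argument_specs', here:
# {'main': {'short_description': ..., 'options': {...}}}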
def _find_all_normal_roles(self, role_paths, name_filters=None):
"""Find all non-collection roles that have an argument spec file.
Note that argument specs do not actually need to exist within the spec file.
:param role_paths: A tuple of one or more role paths. When a role with the same name
is found in multiple paths, only the first-found role is returned.
:param name_filters: A tuple of one or more role names used to filter the results.
:returns: A set of tuples consisting of: role name, full role path
"""
found = set()
found_names = set()
for path in role_paths:
if not os.path.isdir(path):
continue
# Check each subdir for an argument spec file
for entry in os.listdir(path):
role_path = os.path.join(path, entry)
# Check all potential spec files
for specfile in self.ROLE_ARGSPEC_FILES:
full_path = os.path.join(role_path, 'meta', specfile)
if os.path.exists(full_path):
if name_filters is None or entry in name_filters:
if entry not in found_names:
found.add((entry, role_path))
found_names.add(entry)
# select first-found
break
return found
def _find_all_collection_roles(self, name_filters=None, collection_filter=None):
"""Find all collection roles with an argument spec file.
Note that argument specs do not actually need to exist within the spec file.
:param name_filters: A tuple of one or more role names used to filter the results. These
might be fully qualified with the collection name (e.g., community.general.roleA)
or not (e.g., roleA).
:param collection_filter: A string containing the FQCN of a collection which will be
used to limit results. This filter will take precedence over the name_filters.
:returns: A set of tuples consisting of: role name, collection name, collection path
"""
found = set()
b_colldirs = list_collection_dirs(coll_filter=collection_filter)
for b_path in b_colldirs:
path = to_text(b_path, errors='surrogate_or_strict')
collname = _get_collection_name_from_path(b_path)
roles_dir = os.path.join(path, 'roles')
if os.path.exists(roles_dir):
for entry in os.listdir(roles_dir):
# Check all potential spec files
for specfile in self.ROLE_ARGSPEC_FILES:
full_path = os.path.join(roles_dir, entry, 'meta', specfile)
if os.path.exists(full_path):
if name_filters is None:
found.add((entry, collname, path))
else:
# Name filters might contain a collection FQCN or not.
for fqcn in name_filters:
if len(fqcn.split('.')) == 3:
(ns, col, role) = fqcn.split('.')
if '.'.join([ns, col]) == collname and entry == role:
found.add((entry, collname, path))
elif fqcn == entry:
found.add((entry, collname, path))
break
return found
def _build_summary(self, role, collection, argspec):
"""Build a summary dict for a role.
Returns a simplified role arg spec containing only the role entry points and their
short descriptions, and the role collection name (if applicable).
:param role: The simple role name.
:param collection: The collection containing the role (None or empty string if N/A).
:param argspec: The complete role argspec data dict.
:returns: A tuple with the FQCN role name and a summary dict.
"""
if collection:
fqcn = '.'.join([collection, role])
else:
fqcn = role
summary = {}
summary['collection'] = collection
summary['entry_points'] = {}
for ep in argspec.keys():
entry_spec = argspec[ep] or {}
summary['entry_points'][ep] = entry_spec.get('short_description', '')
return (fqcn, summary)
def _build_doc(self, role, path, collection, argspec, entry_point):
if collection:
fqcn = '.'.join([collection, role])
else:
fqcn = role
doc = {}
doc['path'] = path
doc['collection'] = collection
doc['entry_points'] = {}
for ep in argspec.keys():
if entry_point is None or ep == entry_point:
entry_spec = argspec[ep] or {}
doc['entry_points'][ep] = entry_spec
# If we didn't add any entry points (b/c of filtering), ignore this entry.
if len(doc['entry_points'].keys()) == 0:
doc = None
return (fqcn, doc)
def _create_role_list(self, fail_on_errors=True):
"""Return a dict describing the listing of all roles with arg specs.
:param role_paths: A tuple of one or more role paths.
:returns: A dict indexed by role name, with 'collection' and 'entry_points' keys per role.
Example return:
results = {
'roleA': {
'collection': '',
'entry_points': {
'main': 'Short description for main'
}
},
'a.b.c.roleB': {
'collection': 'a.b.c',
'entry_points': {
'main': 'Short description for main',
'alternate': 'Short description for alternate entry point'
}
'x.y.z.roleB': {
'collection': 'x.y.z',
'entry_points': {
'main': 'Short description for main',
}
},
}
"""
roles_path = self._get_roles_path()
collection_filter = self._get_collection_filter()
if not collection_filter:
roles = self._find_all_normal_roles(roles_path)
else:
roles = []
collroles = self._find_all_collection_roles(collection_filter=collection_filter)
result = {}
for role, role_path in roles:
try:
argspec = self._load_argspec(role, role_path=role_path)
fqcn, summary = self._build_summary(role, '', argspec)
result[fqcn] = summary
except Exception as e:
if fail_on_errors:
raise
result[role] = {
'error': 'Error while loading role argument spec: %s' % to_native(e),
}
for role, collection, collection_path in collroles:
try:
argspec = self._load_argspec(role, collection_path=collection_path)
fqcn, summary = self._build_summary(role, collection, argspec)
result[fqcn] = summary
except Exception as e:
if fail_on_errors:
raise
result['%s.%s' % (collection, role)] = {
'error': 'Error while loading role argument spec: %s' % to_native(e),
}
return result
def _create_role_doc(self, role_names, entry_point=None, fail_on_errors=True):
"""
:param role_names: A tuple of one or more role names.
:param role_paths: A tuple of one or more role paths.
:param entry_point: A role entry point name for filtering.
:param fail_on_errors: When set to False, include errors in the JSON output instead of raising errors
:returns: A dict indexed by role name, with 'collection', 'entry_points', and 'path' keys per role.
"""
roles_path = self._get_roles_path()
roles = self._find_all_normal_roles(roles_path, name_filters=role_names)
collroles = self._find_all_collection_roles(name_filters=role_names)
result = {}
for role, role_path in roles:
try:
argspec = self._load_argspec(role, role_path=role_path)
fqcn, doc = self._build_doc(role, role_path, '', argspec, entry_point)
if doc:
result[fqcn] = doc
except Exception as e: # pylint:disable=broad-except
result[role] = {
'error': 'Error while processing role: %s' % to_native(e),
}
for role, collection, collection_path in collroles:
try:
argspec = self._load_argspec(role, collection_path=collection_path)
fqcn, doc = self._build_doc(role, collection_path, collection, argspec, entry_point)
if doc:
result[fqcn] = doc
except Exception as e: # pylint:disable=broad-except
result['%s.%s' % (collection, role)] = {
'error': 'Error while processing role: %s' % to_native(e),
}
return result
class DocCLI(CLI, RoleMixin):
''' displays information on modules installed in Ansible libraries.
It displays a terse listing of plugins and their short descriptions,
provides a printout of their DOCUMENTATION strings,
and it can create a short "snippet" which can be pasted into a playbook. '''
name = 'ansible-doc'
# default ignore list for detailed views
IGNORE = ('module', 'docuri', 'version_added', 'short_description', 'now_date', 'plainexamples', 'returndocs', 'collection')
# Warning: If you add more elements here, you also need to add it to the docsite build (in the
# ansible-community/antsibull repo)
_ITALIC = re.compile(r"\bI\(([^)]+)\)")
_BOLD = re.compile(r"\bB\(([^)]+)\)")
_MODULE = re.compile(r"\bM\(([^)]+)\)")
_LINK = re.compile(r"\bL\(([^)]+), *([^)]+)\)")
_URL = re.compile(r"\bU\(([^)]+)\)")
_REF = re.compile(r"\bR\(([^)]+), *([^)]+)\)")
_CONST = re.compile(r"\bC\(([^)]+)\)")
_RULER = re.compile(r"\bHORIZONTALLINE\b")
# rst specific
_RST_NOTE = re.compile(r".. note::")
_RST_SEEALSO = re.compile(r".. seealso::")
_RST_ROLES = re.compile(r":\w+?:`")
_RST_DIRECTIVES = re.compile(r".. \w+?::")
def __init__(self, args):
super(DocCLI, self).__init__(args)
self.plugin_list = set()
@classmethod
def find_plugins(cls, path, internal, plugin_type, coll_filter=None):
display.deprecated("find_plugins method as it is incomplete/incorrect. use ansible.plugins.list functions instead.", version='2.17')
return list_plugins(plugin_type, coll_filter, [path]).keys()
@classmethod
def tty_ify(cls, text):
# general formatting
t = cls._ITALIC.sub(r"`\1'", text) # I(word) => `word'
t = cls._BOLD.sub(r"*\1*", t) # B(word) => *word*
t = cls._MODULE.sub("[" + r"\1" + "]", t) # M(word) => [word]
t = cls._URL.sub(r"\1", t) # U(word) => word
t = cls._LINK.sub(r"\1 <\2>", t) # L(word, url) => word <url>
t = cls._REF.sub(r"\1", t) # R(word, sphinx-ref) => word
t = cls._CONST.sub(r"`\1'", t) # C(word) => `word'
t = cls._RULER.sub("\n{0}\n".format("-" * 13), t) # HORIZONTALLINE => -------
# remove rst
t = cls._RST_SEEALSO.sub(r"See website for:", t) # seealso is special and need to break
t = cls._RST_NOTE.sub(r"Note:", t) # .. note:: to note:
t = cls._RST_ROLES.sub(r"website for `", t) # remove :ref: and other tags
t = cls._RST_DIRECTIVES.sub(r"", t) # remove .. stuff:: in general
return t
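# Illustrative round-trip (input string is a made-up example):
#   DocCLI.tty_ify("Use M(ansible.builtin.copy) with C(dest), see U(https://docs.ansible.com)")
#   -> "Use [ansible.builtin.copy] with `dest', see https://docs.ansible.com"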
def init_parser(self):
coll_filter = 'A supplied argument will be used for filtering, can be a namespace or full collection name.'
super(DocCLI, self).init_parser(
desc="plugin documentation tool",
epilog="See man pages for Ansible CLI options or website for tutorials https://docs.ansible.com"
)
opt_help.add_module_options(self.parser)
opt_help.add_basedir_options(self.parser)
# targets
self.parser.add_argument('args', nargs='*', help='Plugin', metavar='plugin')
self.parser.add_argument("-t", "--type", action="store", default='module', dest='type',
help='Choose which plugin type (defaults to "module"). '
'Available plugin types are : {0}'.format(TARGET_OPTIONS),
choices=TARGET_OPTIONS)
# formatting
self.parser.add_argument("-j", "--json", action="store_true", default=False, dest='json_format',
help='Change output into json format.')
# TODO: warn if not used with -t roles
# role-specific options
self.parser.add_argument("-r", "--roles-path", dest='roles_path', default=C.DEFAULT_ROLES_PATH,
type=opt_help.unfrack_path(pathsep=True),
action=opt_help.PrependListAction,
help='The path to the directory containing your roles.')
# modifiers
exclusive = self.parser.add_mutually_exclusive_group()
# TODO: warn if not used with -t roles
exclusive.add_argument("-e", "--entry-point", dest="entry_point",
help="Select the entry point for role(s).")
# TODO: warn with --json as it is incompatible
exclusive.add_argument("-s", "--snippet", action="store_true", default=False, dest='show_snippet',
help='Show playbook snippet for these plugin types: %s' % ', '.join(SNIPPETS))
# TODO: warn when arg/plugin is passed
exclusive.add_argument("-F", "--list_files", action="store_true", default=False, dest="list_files",
help='Show plugin names and their source files without summaries (implies --list). %s' % coll_filter)
exclusive.add_argument("-l", "--list", action="store_true", default=False, dest='list_dir',
help='List available plugins. %s' % coll_filter)
exclusive.add_argument("--metadata-dump", action="store_true", default=False, dest='dump',
help='**For internal use only** Dump json metadata for all entries, ignores other options.')
self.parser.add_argument("--no-fail-on-errors", action="store_true", default=False, dest='no_fail_on_errors',
help='**For internal use only** Only used for --metadata-dump. '
'Do not fail on errors. Report the error message in the JSON instead.')
def post_process_args(self, options):
options = super(DocCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def display_plugin_list(self, results):
# format for user
displace = max(len(x) for x in results.keys())
linelimit = display.columns - displace - 5
text = []
deprecated = []
# format display per option
if context.CLIARGS['list_files']:
# list plugin file names
for plugin in sorted(results.keys()):
filename = to_native(results[plugin])
# handle deprecated for builtin/legacy
pbreak = plugin.split('.')
if pbreak[-1].startswith('_') and pbreak[0] == 'ansible' and pbreak[1] in ('builtin', 'legacy'):
pbreak[-1] = pbreak[-1][1:]
plugin = '.'.join(pbreak)
deprecated.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(filename), filename))
else:
text.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(filename), filename))
else:
# list plugin names and short desc
for plugin in sorted(results.keys()):
desc = DocCLI.tty_ify(results[plugin])
if len(desc) > linelimit:
desc = desc[:linelimit] + '...'
pbreak = plugin.split('.')
if pbreak[-1].startswith('_'): # Handle deprecated # TODO: add mark for deprecated collection plugins
pbreak[-1] = pbreak[-1][1:]
plugin = '.'.join(pbreak)
deprecated.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(desc), desc))
else:
text.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(desc), desc))
if len(deprecated) > 0:
text.append("\nDEPRECATED:")
text.extend(deprecated)
# display results
DocCLI.pager("\n".join(text))
def _display_available_roles(self, list_json):
"""Display all roles we can find with a valid argument specification.
Output is: fqcn role name, entry point, short description
"""
roles = list(list_json.keys())
entry_point_names = set()
for role in roles:
for entry_point in list_json[role]['entry_points'].keys():
entry_point_names.add(entry_point)
max_role_len = 0
max_ep_len = 0
if roles:
max_role_len = max(len(x) for x in roles)
if entry_point_names:
max_ep_len = max(len(x) for x in entry_point_names)
linelimit = display.columns - max_role_len - max_ep_len - 5
text = []
for role in sorted(roles):
for entry_point, desc in list_json[role]['entry_points'].items():
if len(desc) > linelimit:
desc = desc[:linelimit] + '...'
text.append("%-*s %-*s %s" % (max_role_len, role,
max_ep_len, entry_point,
desc))
# display results
DocCLI.pager("\n".join(text))
def _display_role_doc(self, role_json):
roles = list(role_json.keys())
text = []
for role in roles:
text += self.get_role_man_text(role, role_json[role])
# display results
DocCLI.pager("\n".join(text))
@staticmethod
def _list_keywords():
return from_yaml(pkgutil.get_data('ansible', 'keyword_desc.yml'))
@staticmethod
def _get_keywords_docs(keys):
data = {}
descs = DocCLI._list_keywords()
for key in keys:
if key.startswith('with_'):
# simplify loops; we don't want to handle every with_<lookup> combo
keyword = 'loop'
elif key == 'async':
# because 'async' became a reserved word in Python, we had to rename it internally
keyword = 'async_val'
else:
keyword = key
try:
# if no desc, typeerror raised ends this block
kdata = {'description': descs[key]}
# get playbook objects for keyword and use first to get keyword attributes
kdata['applies_to'] = []
for pobj in PB_OBJECTS:
if pobj not in PB_LOADED:
obj_class = 'ansible.playbook.%s' % pobj.lower()
loaded_class = importlib.import_module(obj_class)
PB_LOADED[pobj] = getattr(loaded_class, pobj, None)
if keyword in PB_LOADED[pobj]._valid_attrs:
kdata['applies_to'].append(pobj)
# we should only need these once
if 'type' not in kdata:
fa = getattr(PB_LOADED[pobj], '_%s' % keyword)
if getattr(fa, 'private'):
kdata = {}
raise KeyError
kdata['type'] = getattr(fa, 'isa', 'string')
if keyword.endswith('when'):
kdata['template'] = 'implicit'
elif getattr(fa, 'static'):
kdata['template'] = 'static'
else:
kdata['template'] = 'explicit'
# those that require no processing
for visible in ('alias', 'priority'):
kdata[visible] = getattr(fa, visible)
# remove None keys
for k in list(kdata.keys()):
if kdata[k] is None:
del kdata[k]
data[key] = kdata
except (AttributeError, KeyError) as e:
display.warning("Skipping Invalid keyword '%s' specified: %s" % (key, to_text(e)))
if display.verbosity >= 3:
display.verbose(traceback.format_exc())
return data
def _get_collection_filter(self):
coll_filter = None
if len(context.CLIARGS['args']) == 1:
coll_filter = context.CLIARGS['args'][0]
if not AnsibleCollectionRef.is_valid_collection_name(coll_filter):
raise AnsibleError('Invalid collection name (must be of the form namespace.collection): {0}'.format(coll_filter))
elif len(context.CLIARGS['args']) > 1:
raise AnsibleOptionsError("Only a single collection filter is supported.")
return coll_filter
def _list_plugins(self, plugin_type, content):
results = {}
self.plugins = {}
loader = DocCLI._prep_loader(plugin_type)
coll_filter = self._get_collection_filter()
self.plugins.update(list_plugins(plugin_type, coll_filter, context.CLIARGS['module_path']))
# get appropriate content depending on option
if content == 'dir':
results = self._get_plugin_list_descriptions(loader)
elif content == 'files':
results = {k: self.plugins[k][0] for k in self.plugins.keys()}
else:
results = {k: {} for k in self.plugins.keys()}
self.plugin_list = set() # reset for next iteration
return results
def _get_plugins_docs(self, plugin_type, names, fail_ok=False, fail_on_errors=True):
loader = DocCLI._prep_loader(plugin_type)
search_paths = DocCLI.print_paths(loader)
# get the docs for plugins in the command line list
plugin_docs = {}
for plugin in names:
doc = {}
try:
doc, plainexamples, returndocs, metadata = DocCLI._get_plugin_doc(plugin, plugin_type, loader, search_paths)
except PluginNotFound:
display.warning("%s %s not found in:\n%s\n" % (plugin_type, plugin, search_paths))
continue
except Exception as e:
if not fail_on_errors:
plugin_docs[plugin] = {
'error': 'Missing documentation or could not parse documentation: %s' % to_native(e),
}
continue
display.vvv(traceback.format_exc())
msg = "%s %s missing documentation (or could not parse documentation): %s\n" % (plugin_type, plugin, to_native(e))
if fail_ok:
display.warning(msg)
else:
raise AnsibleError(msg)
if not doc:
# The doc section existed but was empty
if not fail_on_errors:
plugin_docs[plugin] = {
'error': 'No valid documentation found',
}
continue
docs = DocCLI._combine_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata)
if not fail_on_errors:
# Check whether JSON serialization would break
try:
json.dumps(docs, cls=AnsibleJSONEncoder)
except Exception as e: # pylint:disable=broad-except
plugin_docs[plugin] = {
'error': 'Cannot serialize documentation as JSON: %s' % to_native(e),
}
continue
plugin_docs[plugin] = docs
return plugin_docs
def _get_roles_path(self):
'''
Add any 'roles' subdir in playbook dir to the roles search path.
And as a last resort, add the playbook dir itself. Order being:
- 'roles' subdir of playbook dir
- DEFAULT_ROLES_PATH (default in cliargs)
- playbook dir (basedir)
NOTE: This matches logic in RoleDefinition._load_role_path() method.
'''
roles_path = context.CLIARGS['roles_path']
if context.CLIARGS['basedir'] is not None:
subdir = os.path.join(context.CLIARGS['basedir'], "roles")
if os.path.isdir(subdir):
roles_path = (subdir,) + roles_path
roles_path = roles_path + (context.CLIARGS['basedir'],)
return roles_path
@staticmethod
def _prep_loader(plugin_type):
''' return a plugin-type-specific loader '''
loader = getattr(plugin_loader, '%s_loader' % plugin_type)
# add to plugin paths from command line
if context.CLIARGS['basedir'] is not None:
loader.add_directory(context.CLIARGS['basedir'], with_subdir=True)
if context.CLIARGS['module_path']:
for path in context.CLIARGS['module_path']:
if path:
loader.add_directory(path)
# save only top level paths for errors
loader._paths = None # reset so we can use subdirs later
return loader
def run(self):
super(DocCLI, self).run()
basedir = context.CLIARGS['basedir']
plugin_type = context.CLIARGS['type'].lower()
do_json = context.CLIARGS['json_format'] or context.CLIARGS['dump']
listing = context.CLIARGS['list_files'] or context.CLIARGS['list_dir']
if context.CLIARGS['list_files']:
content = 'files'
elif context.CLIARGS['list_dir']:
content = 'dir'
else:
content = None
docs = {}
if basedir:
AnsibleCollectionConfig.playbook_paths = basedir
if plugin_type not in TARGET_OPTIONS:
raise AnsibleOptionsError("Unknown or undocumentable plugin type: %s" % plugin_type)
if context.CLIARGS['dump']:
# we always dump all types, ignore restrictions
ptypes = TARGET_OPTIONS
docs['all'] = {}
for ptype in ptypes:
if ptype == 'role':
roles = self._create_role_list(fail_on_errors=not context.CLIARGS['no_fail_on_errors'])
docs['all'][ptype] = self._create_role_doc(
roles.keys(), context.CLIARGS['entry_point'], fail_on_errors=not context.CLIARGS['no_fail_on_errors'])
elif ptype == 'keyword':
names = DocCLI._list_keywords()
docs['all'][ptype] = DocCLI._get_keywords_docs(names.keys())
else:
plugin_names = self._list_plugins(ptype, None)
# TODO: remove exception for test/filter once all core ones are documented
docs['all'][ptype] = self._get_plugins_docs(ptype, plugin_names, fail_ok=(ptype in ('test', 'filter')),
fail_on_errors=not context.CLIARGS['no_fail_on_errors'])
# reset list after each type to avoid pollution
elif listing:
if plugin_type == 'keyword':
docs = DocCLI._list_keywords()
elif plugin_type == 'role':
docs = self._create_role_list()
else:
docs = self._list_plugins(plugin_type, content)
else:
# here we require a name
if len(context.CLIARGS['args']) == 0:
raise AnsibleOptionsError("Missing name(s), incorrect options passed for detailed documentation.")
if plugin_type == 'keyword':
docs = DocCLI._get_keywords_docs(context.CLIARGS['args'])
elif plugin_type == 'role':
docs = self._create_role_doc(context.CLIARGS['args'], context.CLIARGS['entry_point'])
else:
# display specific plugin docs
docs = self._get_plugins_docs(plugin_type, context.CLIARGS['args'])
# Display the docs
if do_json:
jdump(docs)
else:
text = []
if plugin_type in C.DOCUMENTABLE_PLUGINS:
if listing and docs:
self.display_plugin_list(docs)
elif context.CLIARGS['show_snippet']:
if plugin_type not in SNIPPETS:
raise AnsibleError('Snippets are only available for the following plugin'
' types: %s' % ', '.join(SNIPPETS))
for plugin, doc_data in docs.items():
try:
textret = DocCLI.format_snippet(plugin, plugin_type, doc_data['doc'])
except ValueError as e:
display.warning("Unable to construct a snippet for"
" '{0}': {1}".format(plugin, to_text(e)))
else:
text.append(textret)
else:
# Some changes to how plain text docs are formatted
for plugin, doc_data in docs.items():
textret = DocCLI.format_plugin_doc(plugin, plugin_type,
doc_data['doc'], doc_data['examples'],
doc_data['return'], doc_data['metadata'])
if textret:
text.append(textret)
else:
display.warning("No valid documentation was retrieved from '%s'" % plugin)
elif plugin_type == 'role':
if context.CLIARGS['list_dir'] and docs:
self._display_available_roles(docs)
elif docs:
self._display_role_doc(docs)
elif docs:
text = DocCLI._dump_yaml(docs, '')
if text:
DocCLI.pager(''.join(text))
return 0
@staticmethod
def get_all_plugins_of_type(plugin_type):
loader = getattr(plugin_loader, '%s_loader' % plugin_type)
paths = loader._get_paths_with_context()
plugins = {}
for path_context in paths:
plugins.update(list_plugins(plugin_type, search_paths=context.CLIARGS['module_path']))
return sorted(plugins.keys())
@staticmethod
def get_plugin_metadata(plugin_type, plugin_name):
# if the plugin lives in a non-python file (eg, win_X.ps1), require the corresponding python file for docs
loader = getattr(plugin_loader, '%s_loader' % plugin_type)
result = loader.find_plugin_with_context(plugin_name, mod_type='.py', ignore_deprecated=True, check_aliases=True)
if not result.resolved:
raise AnsibleError("unable to load {0} plugin named {1} ".format(plugin_type, plugin_name))
filename = result.plugin_resolved_path
collection_name = result.plugin_resolved_collection
try:
doc, __, __, __ = get_docstring(filename, fragment_loader, verbose=(context.CLIARGS['verbosity'] > 0),
collection_name=collection_name, plugin_type=plugin_type)
except Exception:
display.vvv(traceback.format_exc())
raise AnsibleError("%s %s at %s has a documentation formatting error or is missing documentation." % (plugin_type, plugin_name, filename))
if doc is None:
# Removed plugins don't have any documentation
return None
return dict(
name=plugin_name,
namespace=DocCLI.namespace_from_plugin_filepath(filename, plugin_name, loader.package_path),
description=doc.get('short_description', "UNKNOWN"),
version_added=doc.get('version_added', "UNKNOWN")
)
@staticmethod
def namespace_from_plugin_filepath(filepath, plugin_name, basedir):
if not basedir.endswith('/'):
basedir += '/'
rel_path = filepath.replace(basedir, '')
extension_free = os.path.splitext(rel_path)[0]
namespace_only = extension_free.rsplit(plugin_name, 1)[0].strip('/_')
clean_ns = namespace_only.replace('/', '.')
if clean_ns == '':
clean_ns = None
return clean_ns
@staticmethod
def _get_plugin_doc(plugin, plugin_type, loader, search_paths):
# if the plugin lives in a non-python file (eg, win_X.ps1), require the corresponding python file for docs
for ext in C.DOC_EXTENSIONS:
result = loader.find_plugin_with_context(plugin, mod_type=ext, ignore_deprecated=True, check_aliases=True)
if result.resolved:
break
else:
if not result.resolved:
raise PluginNotFound('%s was not found in %s' % (plugin, search_paths))
filename = result.plugin_resolved_path
collection_name = result.plugin_resolved_collection
doc, plainexamples, returndocs, metadata = get_docstring(
filename, fragment_loader, verbose=(context.CLIARGS['verbosity'] > 0),
collection_name=collection_name, plugin_type=plugin_type)
# If the plugin existed but did not have a DOCUMENTATION element and was not removed, it's an error
if doc is None:
raise ValueError('%s did not contain a DOCUMENTATION attribute' % plugin)
doc['filename'] = filename
doc['collection'] = collection_name
return doc, plainexamples, returndocs, metadata
@staticmethod
def _combine_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata):
# generate extra data
if plugin_type == 'module':
# is there corresponding action plugin?
if plugin in action_loader:
doc['has_action'] = True
else:
doc['has_action'] = False
# return everything as one dictionary
return {'doc': doc, 'examples': plainexamples, 'return': returndocs, 'metadata': metadata}
@staticmethod
def format_snippet(plugin, plugin_type, doc):
''' return heavily commented plugin use to insert into play '''
if plugin_type == 'inventory' and doc.get('options', {}).get('plugin'):
# these do not take a yaml config that we can write a snippet for
raise ValueError('The {0} inventory plugin does not take YAML type config source'
' that can be used with the "auto" plugin so a snippet cannot be'
' created.'.format(plugin))
text = []
if plugin_type == 'lookup':
text = _do_lookup_snippet(doc)
elif 'options' in doc:
text = _do_yaml_snippet(doc)
text.append('')
return "\n".join(text)
@staticmethod
def format_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata):
collection_name = doc['collection']
# TODO: do we really want this?
# add_collection_to_versions_and_dates(doc, '(unknown)', is_module=(plugin_type == 'module'))
# remove_current_collection_from_versions_and_dates(doc, collection_name, is_module=(plugin_type == 'module'))
# remove_current_collection_from_versions_and_dates(
# returndocs, collection_name, is_module=(plugin_type == 'module'), return_docs=True)
# assign from other sections
doc['plainexamples'] = plainexamples
doc['returndocs'] = returndocs
doc['metadata'] = metadata
try:
text = DocCLI.get_man_text(doc, collection_name, plugin_type)
except Exception as e:
display.vvv(traceback.format_exc())
raise AnsibleError("Unable to retrieve documentation from '%s' due to: %s" % (plugin, to_native(e)), orig_exc=e)
return text
def _get_plugin_list_descriptions(self, loader):
descs = {}
for plugin in self.plugins.keys():
doc = None
filename = Path(to_native(self.plugins[plugin][0]))
docerror = None
try:
doc = read_docstub(filename)
except Exception as e:
docerror = e
# plugin file was empty or had error, lets try other options
if doc is None:
# handle test/filters that are in file with diff name
base = plugin.split('.')[-1]
basefile = filename.with_name(base + filename.suffix)
for extension in ('.py', '.yml', '.yaml'): # TODO: constant?
docfile = basefile.with_suffix(extension)
try:
if docfile.exists():
doc = read_docstub(docfile)
except Exception as e:
docerror = e
if docerror:
display.warning("%s has a documentation formatting error: %s" % (plugin, docerror))
continue
if not doc or not isinstance(doc, dict):
desc = 'UNDOCUMENTED'
else:
desc = doc.get('short_description', 'INVALID SHORT DESCRIPTION').strip()
descs[plugin] = desc
return descs
@staticmethod
def print_paths(finder):
''' Returns a string suitable for printing of the search path '''
# Uses a list to get the order right
ret = []
for i in finder._get_paths(subdirs=False):
i = to_text(i, errors='surrogate_or_strict')
if i not in ret:
ret.append(i)
return os.pathsep.join(ret)
@staticmethod
def _dump_yaml(struct, indent):
return DocCLI.tty_ify('\n'.join([indent + line for line in yaml_dump(struct, default_flow_style=False, Dumper=AnsibleDumper).split('\n')]))
@staticmethod
def _format_version_added(version_added, version_added_collection=None):
if version_added_collection == 'ansible.builtin':
version_added_collection = 'ansible-core'
# In ansible-core, version_added can be 'historical'
if version_added == 'historical':
return 'historical'
if version_added_collection:
version_added = '%s of %s' % (version_added, version_added_collection)
return 'version %s' % (version_added, )
@staticmethod
def add_fields(text, fields, limit, opt_indent, return_values=False, base_indent=''):
for o in sorted(fields):
# Create a copy so we don't modify the original (in case YAML anchors have been used)
opt = dict(fields[o])
required = opt.pop('required', False)
if not isinstance(required, bool):
raise AnsibleError("Incorrect value for 'Required', a boolean is needed.: %s" % required)
if required:
opt_leadin = "="
else:
opt_leadin = "-"
text.append("%s%s %s" % (base_indent, opt_leadin, o))
if 'description' not in opt:
raise AnsibleError("All (sub-)options and return values must have a 'description' field")
if isinstance(opt['description'], list):
for entry_idx, entry in enumerate(opt['description'], 1):
if not isinstance(entry, string_types):
raise AnsibleError("Expected string in description of %s at index %s, got %s" % (o, entry_idx, type(entry)))
text.append(textwrap.fill(DocCLI.tty_ify(entry), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
else:
if not isinstance(opt['description'], string_types):
raise AnsibleError("Expected string in description of %s, got %s" % (o, type(opt['description'])))
text.append(textwrap.fill(DocCLI.tty_ify(opt['description']), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
del opt['description']
aliases = ''
if 'aliases' in opt:
if len(opt['aliases']) > 0:
aliases = "(Aliases: " + ", ".join(to_text(i) for i in opt['aliases']) + ")"
del opt['aliases']
choices = ''
if 'choices' in opt:
if len(opt['choices']) > 0:
choices = "(Choices: " + ", ".join(to_text(i) for i in opt['choices']) + ")"
del opt['choices']
default = ''
if not return_values:
if 'default' in opt or not required:
default = "[Default: %s" % to_text(opt.pop('default', '(null)')) + "]"
text.append(textwrap.fill(DocCLI.tty_ify(aliases + choices + default), limit,
initial_indent=opt_indent, subsequent_indent=opt_indent))
suboptions = []
for subkey in ('options', 'suboptions', 'contains', 'spec'):
if subkey in opt:
suboptions.append((subkey, opt.pop(subkey)))
conf = {}
for config in ('env', 'ini', 'yaml', 'vars', 'keyword'):
if config in opt and opt[config]:
# Create a copy so we don't modify the original (in case YAML anchors have been used)
conf[config] = [dict(item) for item in opt.pop(config)]
for ignore in DocCLI.IGNORE:
for item in conf[config]:
if ignore in item:
del item[ignore]
if 'cli' in opt and opt['cli']:
conf['cli'] = []
for cli in opt['cli']:
if 'option' not in cli:
conf['cli'].append({'name': cli['name'], 'option': '--%s' % cli['name'].replace('_', '-')})
else:
conf['cli'].append(cli)
del opt['cli']
if conf:
text.append(DocCLI._dump_yaml({'set_via': conf}, opt_indent))
version_added = opt.pop('version_added', None)
version_added_collection = opt.pop('version_added_collection', None)
for k in sorted(opt):
if k.startswith('_'):
continue
if isinstance(opt[k], string_types):
text.append('%s%s: %s' % (opt_indent, k,
textwrap.fill(DocCLI.tty_ify(opt[k]),
limit - (len(k) + 2),
subsequent_indent=opt_indent)))
elif isinstance(opt[k], (Sequence)) and all(isinstance(x, string_types) for x in opt[k]):
text.append(DocCLI.tty_ify('%s%s: %s' % (opt_indent, k, ', '.join(opt[k]))))
else:
text.append(DocCLI._dump_yaml({k: opt[k]}, opt_indent))
if version_added:
text.append("%sadded in: %s\n" % (opt_indent, DocCLI._format_version_added(version_added, version_added_collection)))
for subkey, subdata in suboptions:
text.append('')
text.append("%s%s:\n" % (opt_indent, subkey.upper()))
DocCLI.add_fields(text, subdata, limit, opt_indent + ' ', return_values, opt_indent)
if not suboptions:
text.append('')
def get_role_man_text(self, role, role_json):
'''Generate text for the supplied role suitable for display.
This is similar to get_man_text(), but roles are different enough that we have
a separate method for formatting their display.
:param role: The role name.
:param role_json: The JSON for the given role as returned from _create_role_doc().
:returns: An array of text suitable for displaying to screen.
'''
text = []
opt_indent = " "
pad = display.columns * 0.20
limit = max(display.columns - int(pad), 70)
text.append("> %s (%s)\n" % (role.upper(), role_json.get('path')))
for entry_point in role_json['entry_points']:
doc = role_json['entry_points'][entry_point]
if doc.get('short_description'):
text.append("ENTRY POINT: %s - %s\n" % (entry_point, doc.get('short_description')))
else:
text.append("ENTRY POINT: %s\n" % entry_point)
if doc.get('description'):
if isinstance(doc['description'], list):
desc = " ".join(doc['description'])
else:
desc = doc['description']
text.append("%s\n" % textwrap.fill(DocCLI.tty_ify(desc),
limit, initial_indent=opt_indent,
subsequent_indent=opt_indent))
if doc.get('options'):
text.append("OPTIONS (= is mandatory):\n")
DocCLI.add_fields(text, doc.pop('options'), limit, opt_indent)
text.append('')
if doc.get('attributes'):
text.append("ATTRIBUTES:\n")
text.append(DocCLI._dump_yaml(doc.pop('attributes'), opt_indent))
text.append('')
# generic elements we will handle identically
for k in ('author',):
if k not in doc:
continue
if isinstance(doc[k], string_types):
text.append('%s: %s' % (k.upper(), textwrap.fill(DocCLI.tty_ify(doc[k]),
limit - (len(k) + 2), subsequent_indent=opt_indent)))
elif isinstance(doc[k], (list, tuple)):
text.append('%s: %s' % (k.upper(), ', '.join(doc[k])))
else:
# use empty indent since this affects the start of the yaml doc, not its keys
text.append(DocCLI._dump_yaml({k.upper(): doc[k]}, ''))
text.append('')
return text
@staticmethod
def get_man_text(doc, collection_name='', plugin_type=''):
# Create a copy so we don't modify the original
doc = dict(doc)
DocCLI.IGNORE = DocCLI.IGNORE + (context.CLIARGS['type'],)
opt_indent = " "
text = []
pad = display.columns * 0.20
limit = max(display.columns - int(pad), 70)
plugin_name = doc.get(context.CLIARGS['type'], doc.get('name')) or doc.get('plugin_type') or plugin_type
if collection_name:
plugin_name = '%s.%s' % (collection_name, plugin_name)
text.append("> %s (%s)\n" % (plugin_name.upper(), doc.pop('filename')))
if isinstance(doc['description'], list):
desc = " ".join(doc.pop('description'))
else:
desc = doc.pop('description')
text.append("%s\n" % textwrap.fill(DocCLI.tty_ify(desc), limit, initial_indent=opt_indent,
subsequent_indent=opt_indent))
if 'version_added' in doc:
version_added = doc.pop('version_added')
version_added_collection = doc.pop('version_added_collection', None)
text.append("ADDED IN: %s\n" % DocCLI._format_version_added(version_added, version_added_collection))
if doc.get('deprecated', False):
text.append("DEPRECATED: \n")
if isinstance(doc['deprecated'], dict):
if 'removed_at_date' in doc['deprecated']:
text.append(
"\tReason: %(why)s\n\tWill be removed in a release after %(removed_at_date)s\n\tAlternatives: %(alternative)s" % doc.pop('deprecated')
)
else:
if 'version' in doc['deprecated'] and 'removed_in' not in doc['deprecated']:
doc['deprecated']['removed_in'] = doc['deprecated']['version']
text.append("\tReason: %(why)s\n\tWill be removed in: Ansible %(removed_in)s\n\tAlternatives: %(alternative)s" % doc.pop('deprecated'))
else:
text.append("%s" % doc.pop('deprecated'))
text.append("\n")
if doc.pop('has_action', False):
text.append(" * note: %s\n" % "This module has a corresponding action plugin.")
if doc.get('options', False):
text.append("OPTIONS (= is mandatory):\n")
DocCLI.add_fields(text, doc.pop('options'), limit, opt_indent)
text.append('')
if doc.get('attributes', False):
text.append("ATTRIBUTES:\n")
text.append(DocCLI._dump_yaml(doc.pop('attributes'), opt_indent))
text.append('')
if doc.get('notes', False):
text.append("NOTES:")
for note in doc['notes']:
text.append(textwrap.fill(DocCLI.tty_ify(note), limit - 6,
initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
text.append('')
text.append('')
del doc['notes']
if doc.get('seealso', False):
text.append("SEE ALSO:")
for item in doc['seealso']:
if 'module' in item:
text.append(textwrap.fill(DocCLI.tty_ify('Module %s' % item['module']),
limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
description = item.get('description', 'The official documentation on the %s module.' % item['module'])
text.append(textwrap.fill(DocCLI.tty_ify(description), limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink('modules/%s_module.html' % item['module'])),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent))
elif 'name' in item and 'link' in item and 'description' in item:
text.append(textwrap.fill(DocCLI.tty_ify(item['name']),
limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
text.append(textwrap.fill(DocCLI.tty_ify(item['description']),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
text.append(textwrap.fill(DocCLI.tty_ify(item['link']),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
elif 'ref' in item and 'description' in item:
text.append(textwrap.fill(DocCLI.tty_ify('Ansible documentation [%s]' % item['ref']),
limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
text.append(textwrap.fill(DocCLI.tty_ify(item['description']),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink('/#stq=%s&stp=1' % item['ref'])),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
text.append('')
text.append('')
del doc['seealso']
if doc.get('requirements', False):
req = ", ".join(doc.pop('requirements'))
text.append("REQUIREMENTS:%s\n" % textwrap.fill(DocCLI.tty_ify(req), limit - 16, initial_indent=" ", subsequent_indent=opt_indent))
# Generic handler
for k in sorted(doc):
if k in DocCLI.IGNORE or not doc[k]:
continue
if isinstance(doc[k], string_types):
text.append('%s: %s' % (k.upper(), textwrap.fill(DocCLI.tty_ify(doc[k]), limit - (len(k) + 2), subsequent_indent=opt_indent)))
elif isinstance(doc[k], (list, tuple)):
text.append('%s: %s' % (k.upper(), ', '.join(doc[k])))
else:
# use empty indent since this affects the start of the yaml doc, not its keys
text.append(DocCLI._dump_yaml({k.upper(): doc[k]}, ''))
del doc[k]
text.append('')
if doc.get('plainexamples', False):
text.append("EXAMPLES:")
text.append('')
if isinstance(doc['plainexamples'], string_types):
text.append(doc.pop('plainexamples').strip())
else:
text.append(yaml_dump(doc.pop('plainexamples'), indent=2, default_flow_style=False))
text.append('')
text.append('')
if doc.get('returndocs', False):
text.append("RETURN VALUES:")
DocCLI.add_fields(text, doc.pop('returndocs'), limit, opt_indent, return_values=True)
return "\n".join(text)
def _do_yaml_snippet(doc):
text = []
mdesc = DocCLI.tty_ify(doc['short_description'])
module = doc.get('module')
if module:
# this is actually a usable task!
text.append("- name: %s" % (mdesc))
text.append(" %s:" % (module))
else:
# just a comment, hopefully useful yaml file
text.append("# %s:" % doc.get('plugin', doc.get('name')))
pad = 29
subdent = '# '.rjust(pad + 2)
limit = display.columns - pad
for o in sorted(doc['options'].keys()):
opt = doc['options'][o]
if isinstance(opt['description'], string_types):
desc = DocCLI.tty_ify(opt['description'])
else:
desc = DocCLI.tty_ify(" ".join(opt['description']))
required = opt.get('required', False)
if not isinstance(required, bool):
raise ValueError("Incorrect value for 'Required', a boolean is needed: %s" % required)
o = '%s:' % o
if module:
if required:
desc = "(required) %s" % desc
text.append(" %-20s # %s" % (o, textwrap.fill(desc, limit, subsequent_indent=subdent)))
else:
if required:
default = '(required)'
else:
default = opt.get('default', 'None')
text.append("%s %-9s # %s" % (o, default, textwrap.fill(desc, limit, subsequent_indent=subdent, max_lines=3)))
return text
def _do_lookup_snippet(doc):
text = []
snippet = "lookup('%s', " % doc.get('plugin', doc.get('name'))
comment = []
for o in sorted(doc['options'].keys()):
opt = doc['options'][o]
comment.append('# %s(%s): %s' % (o, opt.get('type', 'string'), opt.get('description', '')))
if o in ('_terms', '_raw', '_list'):
# these are 'list of arguments'
snippet += '< %s >' % (o)
continue
required = opt.get('required', False)
if not isinstance(required, bool):
raise ValueError("Incorrect value for 'Required', a boolean is needed: %s" % required)
if required:
default = '<REQUIRED>'
else:
default = opt.get('default', 'None')
if opt.get('type') in ('string', 'str'):
snippet += ", %s='%s'" % (o, default)
else:
snippet += ', %s=%s' % (o, default)
snippet += ")"
if comment:
text.extend(comment)
text.append('')
text.append(snippet)
return text
def main(args=None):
DocCLI.cli_executor(args)
if __name__ == '__main__':
main()
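# --- Illustrative usage (editor's sketch; the flags below exist in ansible-doc,
# though exact output can vary by version) ---
#   $ ansible-doc -t module ping      # plain-text docs for a single plugin
#   $ ansible-doc -l -t lookup        # list all lookup plugins
#   $ ansible-doc -s copy             # emit a commented task snippet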
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,644 |
ansible-config does not dump config in parseable format
|
##### SUMMARY
While `ansible-config dump` is a very useful tool for determining the configuration used by Ansible at runtime, it does have a major flaw: its output is not machine parseable.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible-config
##### ADDITIONAL INFORMATION
ansible-config should have an option to dump config in a machine parseable format like JSON or YAML, so it can be consumed by other tools.
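For illustration, the requested interface might look something like this (the `--format` flag is a proposed name here, not an existing option):
```console
# proposed: choose an output format that other tools can parse
$ ansible-config dump --format json > config.json
$ ansible-config dump --format yaml
```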
|
https://github.com/ansible/ansible/issues/73644
|
https://github.com/ansible/ansible/pull/77447
|
a3cc6a581ef191faf1c9ec102d1c116c4c2a8778
|
a12e0a0e874c6c0d18a1a2d83dcb106d207136af
| 2021-02-18T10:28:01Z |
python
| 2022-04-28T23:58:58Z |
lib/ansible/module_utils/common/json.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2019 Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import datetime
from ansible.module_utils._text import to_text
from ansible.module_utils.common._collections_compat import Mapping
from ansible.module_utils.common.collections import is_sequence
def _is_unsafe(value):
return getattr(value, '__UNSAFE__', False) and not getattr(value, '__ENCRYPTED__', False)
def _is_vault(value):
return getattr(value, '__ENCRYPTED__', False)
def _preprocess_unsafe_encode(value):
"""Recursively preprocess a data structure converting instances of ``AnsibleUnsafe``
into their JSON dict representations
Used in ``AnsibleJSONEncoder.iterencode``
"""
if _is_unsafe(value):
value = {'__ansible_unsafe': to_text(value, errors='surrogate_or_strict', nonstring='strict')}
elif is_sequence(value):
value = [_preprocess_unsafe_encode(v) for v in value]
elif isinstance(value, Mapping):
value = dict((k, _preprocess_unsafe_encode(v)) for k, v in value.items())
return value
class AnsibleJSONEncoder(json.JSONEncoder):
'''
Simple encoder class to deal with JSON encoding of Ansible internal types
'''
def __init__(self, preprocess_unsafe=False, vault_to_text=False, **kwargs):
self._preprocess_unsafe = preprocess_unsafe
self._vault_to_text = vault_to_text
super(AnsibleJSONEncoder, self).__init__(**kwargs)
# NOTE: ALWAYS inform AWS/Tower when new items get added as they consume them downstream via a callback
def default(self, o):
if getattr(o, '__ENCRYPTED__', False):
# vault object
if self._vault_to_text:
value = to_text(o, errors='surrogate_or_strict')
else:
value = {'__ansible_vault': to_text(o._ciphertext, errors='surrogate_or_strict', nonstring='strict')}
elif getattr(o, '__UNSAFE__', False):
# unsafe object, this will never be triggered, see ``AnsibleJSONEncoder.iterencode``
value = {'__ansible_unsafe': to_text(o, errors='surrogate_or_strict', nonstring='strict')}
elif isinstance(o, Mapping):
# hostvars and other objects
value = dict(o)
elif isinstance(o, (datetime.date, datetime.datetime)):
# date object
value = o.isoformat()
else:
# use default encoder
value = super(AnsibleJSONEncoder, self).default(o)
return value
def iterencode(self, o, **kwargs):
"""Custom iterencode, primarily design to handle encoding ``AnsibleUnsafe``
as the ``AnsibleUnsafe`` subclasses inherit from string types and
``json.JSONEncoder`` does not support custom encoders for string types
"""
if self._preprocess_unsafe:
o = _preprocess_unsafe_encode(o)
return super(AnsibleJSONEncoder, self).iterencode(o, **kwargs)
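# --- Illustrative usage sketch (editor's addition, not part of the original
# module). The encoder plugs into the stdlib json entry points:
#
#     import datetime, json
#     data = {'when': datetime.datetime(2022, 4, 28), 'hosts': ['web1', 'web2']}
#     json.dumps(data, cls=AnsibleJSONEncoder, preprocess_unsafe=True)
#
# Dates are serialized to ISO-8601 strings by default(), and any AnsibleUnsafe
# values are rewritten to their tagged dict form by iterencode() first.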
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,698 |
Ansible Unable To Run win_powershell with become: yes (Win32ErrorCode 1314)
|
### Summary
https://docs.ansible.com/ansible/latest/user_guide/become.html#limitations-of-become-on-windows
I am currently working on getting an AD service account built for Ansible to use to manage member servers and domain controllers in our domain. Both our Linux and Windows machines are joined to the same domain, but for the sake of this issue I am limiting my tests to Windows. I am using SSH to connect into our Windows hosts.
I have followed the documentation linked and am now able to login to our hosts via SSH and run anything that does not use `become: yes`. However, when I try the following code block I get this error.
```
- name: Base Connection Tester
hosts: tester-host
gather_facts: no
#debugger: on_failed
tasks:
# Windows
- ansible.windows.win_whoami:
- ansible.windows.win_powershell:
script: |
Write-Output "Hello world this is $(whoami) on $($env:COMPUTERNAME)"
become: yes
register: pwsh_output
- debug:
msg: "{{ pwsh_output.output[0] }}"
vars:
ansible_user: "[email protected]"
ansible_become_user: "[email protected]"
ansible_ssh_pass: "redacted"
ansible_become_pass: "redacted"
ansible_connection: ssh
ansible_shell_type: powershell
ansible_become_method: runas
ansible_platform: windows
ansible_shell_executable: None
```
```
TASK [ansible.windows.win_whoami]
ok: [tester-host] => {
.
.
.
"login_domain": "DOMAIN",
"login_time": "2022-04-28T19:13:07.1060743+00:00",
"logon_id": 14989898,
"logon_server": "TESTER-HOST",
"logon_type": "NetworkCleartext",
"privileges": {
"SeChangeNotifyPrivilege": "enabled-by-default",
"SeDebugPrivilege": "enabled-by-default",
"SeIncreaseWorkingSetPrivilege": "enabled-by-default"
},
"rights": [],
"token_type": "TokenPrimary",
"upn": "[email protected]",
"user_flags": []
}
.
.
.
TASK [ansible.windows.win_powershell]
.
.
.
The full traceback is:
Exception calling "CreateProcessAsUser" with "9" argument(s): "CreateProcessWithTokenW() failed (A required privilege is not held by the client, Win32ErrorCode 1314)"
At line:103 char:5
+ $result = [Ansible.Become.BecomeUtil]::CreateProcessAsUser($usern ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : Win32Exception
ScriptStackTrace:
at <ScriptBlock>, <No file>: line 103
at <ScriptBlock><End>, <No file>: line 137
at <ScriptBlock>, <No file>: line 11
System.Management.Automation.MethodInvocationException: Exception calling "CreateProcessAsUser" with "9" argument(s): "CreateProcessWithTokenW() failed (A required privilege is not held by the client, Win32ErrorCode 1314)" ---> Ansible.Process.Win32Exception: CreateProcessWithTokenW() failed (A required privilege is not held by the client, Win32ErrorCode 1314)
at Ansible.Become.BecomeUtil.CreateProcessAsUser(String username, String password, LogonFlags logonFlags, LogonType logonType, String lpApplicationName, String lpCommandLine, String lpCurrentDirectory, IDictionary environment, Byte[] stdin) in c:\Users\ansible\AppData\Local\Temp\qcsb3ejc.1.cs:line 309
at CallSite.Target(Closure , CallSite , Type , Object , Object , Object , Object , Object , Object , Object , Object , Object )
--- End of inner exception stack trace ---
at System.Management.Automation.ExceptionHandlingOps.CheckActionPreference(FunctionContext funcContext, Exception exception)
at System.Management.Automation.Interpreter.ActionCallInstruction`2.Run(InterpretedFrame frame)
at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
fatal: [tester-host]: FAILED! => {
"changed": false,
"msg": "internal error: failed to become user '[email protected]': Exception calling \"CreateProcessAsUser\" with \"9\" argument(s): \"CreateProcessWithTokenW() failed (A required privilege is not held by the client, Win32ErrorCode 1314)\""
}
```
In the documentation it says that the user needs to inherit the `SeAllowLogOnLocally` privilege in order for the become process to succeed. After some research I found [this issue from 2018](https://github.com/ansible/ansible/issues/40310) which seemed similar to my issue. I followed their verification steps and can confirm that the ansible user is getting the permissions it needs. On the tester-host I checked the Local Security Policy and saw the following.

My ansible account can log on locally, debug programs, and log on as a service, and yes, in the `win_whoami` output I do not see the `SeAllowLogOnLocally` privilege listed, which is puzzling.
My end goal is to create a least privilege domain account that Ansible can use to manage our Windows domain joined instances (and hopefully even the domain controllers). I would love to update the documentation to be more detailed but first I have to figure out what exactly I am missing here. Any help or tips that anyone is willing to share would be greatly appreciated.
Thanks!
### Issue Type
Documentation Report
### Component Name
https://docs.ansible.com/ansible/latest/user_guide/become.html
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.2]
config file = /home/picnicsecurity/ansible.cfg
configured module search path = ['/home/picnicsecurity/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/picnicsecurity/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
### Additional Information
Provides a straightforward guide on how to allow Ansible to SSH into domain joined Windows hosts since not everyone uses WinRM.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77698
|
https://github.com/ansible/ansible/pull/77722
|
fbc5b3f9c520f3c360020fa5bf4290f8b875203e
|
dd094a4413ff7f8af018f105c296005675215236
| 2022-04-29T14:52:03Z |
python
| 2022-05-05T14:16:37Z |
docs/docsite/rst/user_guide/become.rst
|
.. _become:
******************************************
Understanding privilege escalation: become
******************************************
Ansible uses existing privilege escalation systems to execute tasks with root privileges or with another user's permissions. Because this feature allows you to 'become' another user, different from the user that logged into the machine (remote user), we call it ``become``. The ``become`` keyword uses existing privilege escalation tools like `sudo`, `su`, `pfexec`, `doas`, `pbrun`, `dzdo`, `ksu`, `runas`, `machinectl` and others.
.. contents::
:local:
Using become
============
You can control the use of ``become`` with play or task directives, connection variables, or at the command line. If you set privilege escalation properties in multiple ways, review the :ref:`general precedence rules<general_precedence_rules>` to understand which settings will be used.
A full list of all become plugins that are included in Ansible can be found in the :ref:`become_plugin_list`.
Become directives
-----------------
You can set the directives that control ``become`` at the play or task level. You can override these by setting connection variables, which often differ from one host to another. These variables and directives are independent. For example, setting ``become_user`` does not set ``become``.
become
set to ``yes`` to activate privilege escalation.
become_user
set to user with desired privileges — the user you `become`, NOT the user you log in as. Does NOT imply ``become: yes``, to allow it to be set at host level. Default value is ``root``.
become_method
(at play or task level) overrides the default method set in ansible.cfg, set to use any of the :ref:`become_plugins`.
become_flags
(at play or task level) permit the use of specific flags for the tasks or role. One common use is to change the user to nobody when the shell is set to nologin. Added in Ansible 2.2.
For example, to manage a system service (which requires ``root`` privileges) when connected as a non-``root`` user, you can use the default value of ``become_user`` (``root``):
.. code-block:: yaml
- name: Ensure the httpd service is running
service:
name: httpd
state: started
become: yes
To run a command as the ``apache`` user:
.. code-block:: yaml
- name: Run a command as the apache user
command: somecommand
become: yes
become_user: apache
To do something as the ``nobody`` user when the shell is nologin:
.. code-block:: yaml
- name: Run a command as nobody
command: somecommand
become: yes
become_method: su
become_user: nobody
become_flags: '-s /bin/sh'
To specify a password for sudo, run ``ansible-playbook`` with ``--ask-become-pass`` (``-K`` for short).
If you run a playbook utilizing ``become`` and the playbook seems to hang, most likely it is stuck at the privilege escalation prompt. Stop it with `CTRL-c`, then execute the playbook with ``-K`` and the appropriate password.
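For example, a minimal invocation (``site.yml`` is a placeholder playbook name):

.. code-block:: console

    $ ansible-playbook site.yml -K
    BECOME password: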
Become connection variables
---------------------------
You can define different ``become`` options for each managed node or group. You can define these variables in inventory or use them as normal variables.
ansible_become
overrides the ``become`` directive, decides if privilege escalation is used or not.
ansible_become_method
which privilege escalation method should be used
ansible_become_user
set the user you become through privilege escalation; does not imply ``ansible_become: yes``
ansible_become_password
set the privilege escalation password. See :ref:`playbooks_vault` for details on how to avoid having secrets in plain text
ansible_common_remote_group
determines if Ansible should try to ``chgrp`` its temporary files to a group if ``setfacl`` and ``chown`` both fail. See `Risks of becoming an unprivileged user`_ for more information. Added in version 2.10.
For example, if you want to run all tasks as ``root`` on a server named ``webserver``, but you can only connect as the ``manager`` user, you could use an inventory entry like this:
.. code-block:: text
webserver ansible_user=manager ansible_become=yes
.. note::
The variables defined above are generic for all become plugins but plugin specific ones can also be set instead.
Please see the documentation for each plugin for a list of all options the plugin has and how they can be defined.
A full list of become plugins in Ansible can be found at :ref:`become_plugins`.
Become command-line options
---------------------------
--ask-become-pass, -K
ask for privilege escalation password; does not imply become will be used. Note that this password will be used for all hosts.
--become, -b
run operations with become (no password implied)
--become-method=BECOME_METHOD
privilege escalation method to use (default=sudo),
valid choices: [ sudo | su | pbrun | pfexec | doas | dzdo | ksu | runas | machinectl ]
--become-user=BECOME_USER
run operations as this user (default=root), does not imply --become/-b
Risks and limitations of become
===============================
Although privilege escalation is mostly intuitive, there are a few limitations
on how it works. Users should be aware of these to avoid surprises.
Risks of becoming an unprivileged user
--------------------------------------
Ansible modules are executed on the remote machine by first substituting the
parameters into the module file, then copying the file to the remote machine,
and finally executing it there.
Everything is fine if the module file is executed without using ``become``,
when the ``become_user`` is root, or when the connection to the remote machine
is made as root. In these cases Ansible creates the module file with
permissions that only allow reading by the user and root, or only allow reading
by the unprivileged user being switched to.
However, when both the connection user and the ``become_user`` are unprivileged,
the module file is written as the user that Ansible connects as (the
``remote_user``), but the file needs to be readable by the user Ansible is set
to ``become``. The details of how Ansible solves this can vary based on platform.
However, on POSIX systems, Ansible solves this problem in the following way:
First, if :command:`setfacl` is installed and available in the remote ``PATH``,
and the temporary directory on the remote host is mounted with POSIX.1e
filesystem ACL support, Ansible will use POSIX ACLs to share the module file
with the second unprivileged user.
Next, if POSIX ACLs are **not** available or :command:`setfacl` could not be
run, Ansible will attempt to change ownership of the module file using
:command:`chown` for systems which support doing so as an unprivileged user.
New in Ansible 2.11, at this point, Ansible will try :command:`chmod +a` which
is a macOS-specific way of setting ACLs on files.
New in Ansible 2.10, if all of the above fails, Ansible will then check the
value of the configuration setting ``ansible_common_remote_group``. Many
systems will allow a given user to change the group ownership of a file to a
group the user is in. As a result, if the second unprivileged user (the
``become_user``) has a UNIX group in common with the user Ansible is connected
as (the ``remote_user``), and if ``ansible_common_remote_group`` is defined to
be that group, Ansible can try to change the group ownership of the module file
to that group by using :command:`chgrp`, thereby likely making it readable to
the ``become_user``.
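As a sketch, this fallback can be enabled per host group (the group name ``deployers`` is a placeholder; it must be a UNIX group shared by both users):

.. code-block:: yaml

    # group_vars/webservers.yml
    ansible_common_remote_group: deployers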
At this point, if ``ansible_common_remote_group`` was defined and a
:command:`chgrp` was attempted and returned successfully, Ansible assumes (but,
importantly, does not check) that the new group ownership is enough and does not
fall back further. That is, Ansible **does not check** that the ``become_user``
does in fact share a group with the ``remote_user``; so long as the command
exits successfully, Ansible considers the result successful and does not proceed
to check ``allow_world_readable_tmpfiles`` per below.
If ``ansible_common_remote_group`` is **not** set and the chown above it failed,
or if ``ansible_common_remote_group`` *is* set but the :command:`chgrp` (or
following group-permissions :command:`chmod`) returned a non-successful exit
code, Ansible will lastly check the value of
``allow_world_readable_tmpfiles``. If this is set, Ansible will place the module
file in a world-readable temporary directory, with world-readable permissions to
allow the ``become_user`` (and incidentally any other user on the system) to
read the contents of the file. **If any of the parameters passed to the module
are sensitive in nature, and you do not trust the remote machines, then this is
a potential security risk.**
Once the module is done executing, Ansible deletes the temporary file.
Several ways exist to avoid the above logic flow entirely:
* Use `pipelining`. When pipelining is enabled, Ansible does not save the
module to a temporary file on the client. Instead it pipes the module to
the remote python interpreter's stdin. Pipelining does not work for
python modules involving file transfer (for example: :ref:`copy <copy_module>`,
:ref:`fetch <fetch_module>`, :ref:`template <template_module>`), or for non-python modules (see the configuration sketch after this list).
* Avoid becoming an unprivileged
user. Temporary files are protected by UNIX file permissions when you
``become`` root or do not use ``become``. In Ansible 2.1 and above, UNIX
file permissions are also secure if you make the connection to the managed
machine as root and then use ``become`` to access an unprivileged account.
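A minimal ``ansible.cfg`` fragment enabling pipelining (shown here under the ``connection`` section; the ``ANSIBLE_PIPELINING`` environment variable works as well):

.. code-block:: ini

    [connection]
    pipelining = True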
.. warning:: Although the Solaris ZFS filesystem has filesystem ACLs, the ACLs
are not POSIX.1e filesystem ACLs (they are NFSv4 ACLs instead). Ansible
cannot use these ACLs to manage its temp file permissions so you may have
to resort to ``allow_world_readable_tmpfiles`` if the remote machines use ZFS.
.. versionchanged:: 2.1
Ansible makes it hard to unknowingly use ``become`` insecurely. Starting in Ansible 2.1,
Ansible defaults to issuing an error if it cannot execute securely with ``become``.
If you cannot use pipelining or POSIX ACLs, must connect as an unprivileged user,
must use ``become`` to execute as a different unprivileged user,
and decide that your managed nodes are secure enough for the
modules you want to run there to be world readable, you can turn on
``allow_world_readable_tmpfiles`` in the :file:`ansible.cfg` file. Setting
``allow_world_readable_tmpfiles`` will change this from an error into
a warning and allow the task to run as it did prior to 2.1.
.. versionchanged:: 2.10
Ansible 2.10 introduces the above-mentioned ``ansible_common_remote_group``
fallback. As mentioned above, if enabled, it is used when ``remote_user`` and
``become_user`` are both unprivileged users. Refer to the text above for details
on when this fallback happens.
.. warning:: As mentioned above, if ``ansible_common_remote_group`` and
``allow_world_readable_tmpfiles`` are both enabled, it is unlikely that the
world-readable fallback will ever trigger, and yet Ansible might still be
unable to access the module file. This is because after the group ownership
change is successful, Ansible does not fall back any further, and also does
not do any check to ensure that the ``become_user`` is actually a member of
the "common group". This is a design decision made by the fact that doing
such a check would require another round-trip connection to the remote
machine, which is a time-expensive operation. Ansible does, however, emit a
warning in this case.
Not supported by all connection plugins
---------------------------------------
Privilege escalation methods must also be supported by the connection plugin
used. Most connection plugins will warn if they do not support become. Some
will just ignore it as they always run as root (jail, chroot, and so on).
Only one method may be enabled per host
---------------------------------------
Methods cannot be chained. You cannot use ``sudo /bin/su -`` to become a user,
you need to have privileges to run the command as that user in sudo or be able
to su directly to it (the same for pbrun, pfexec or other supported methods).
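For illustration, this is the shape of sudoers rule that ``become_method: sudo`` with ``become_user: deploy`` relies on (both usernames are placeholders):

.. code-block:: text

    # /etc/sudoers.d/ansible -- the connecting user may run any command as deploy
    ansible ALL=(deploy) NOPASSWD: ALL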
Privilege escalation must be general
------------------------------------
You cannot limit privilege escalation permissions to certain commands.
Ansible does not always
use a specific command to do something but runs modules (code) from
a temporary file name which changes every time. If you have '/sbin/service'
or '/bin/chmod' as the allowed commands this will fail with ansible as those
paths won't match with the temporary file that Ansible creates to run the
module. If you have security rules that constrain your sudo/pbrun/doas environment
to running specific command paths only, use Ansible from a special account that
does not have this constraint, or use AWX or the :ref:`ansible_platform` to manage indirect access to SSH credentials.
May not access environment variables populated by pamd_systemd
--------------------------------------------------------------
For most Linux distributions using ``systemd`` as their init, the default
methods used by ``become`` do not open a new "session", in the sense of
systemd. Because the ``pam_systemd`` module will not fully initialize a new
session, you might have surprises compared to a normal session opened through
ssh: some environment variables set by ``pam_systemd``, most notably
``XDG_RUNTIME_DIR``, are not populated for the new user and instead inherited
or just emptied.
This might cause trouble when trying to invoke systemd commands that depend on
``XDG_RUNTIME_DIR`` to access the bus:
.. code-block:: console
$ echo $XDG_RUNTIME_DIR
$ systemctl --user status
Failed to connect to bus: Permission denied
To force ``become`` to open a new systemd session that goes through
``pam_systemd``, you can use ``become_method: machinectl``.
For more information, see `this systemd issue
<https://github.com/systemd/systemd/issues/825#issuecomment-127917622>`_.
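For example, a task of this shape (the user name and command are placeholders) runs through a fully initialized ``pam_systemd`` session:

.. code-block:: yaml

    - name: Reload user-level systemd units as appuser
      command: systemctl --user daemon-reload
      become: yes
      become_user: appuser
      become_method: machinectl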
.. _become_network:
Become and network automation
=============================
As of version 2.6, Ansible supports ``become`` for privilege escalation (entering ``enable`` mode or privileged EXEC mode) on all Ansible-maintained network platforms that support ``enable`` mode. Using ``become`` replaces the ``authorize`` and ``auth_pass`` options in a ``provider`` dictionary.
You must set the connection type to either ``connection: ansible.netcommon.network_cli`` or ``connection: ansible.netcommon.httpapi`` to use ``become`` for privilege escalation on network devices. Check the :ref:`platform_options` documentation for details.
You can use escalated privileges on only the specific tasks that need them, on an entire play, or on all plays. Adding ``become: yes`` and ``become_method: enable`` instructs Ansible to enter ``enable`` mode before executing the task, play, or playbook where those parameters are set.
If you see this error message, the task that generated it requires ``enable`` mode to succeed:
.. code-block:: console
Invalid input (privileged mode required)
To set ``enable`` mode for a specific task, add ``become`` at the task level:
.. code-block:: yaml
- name: Gather facts (eos)
arista.eos.eos_facts:
gather_subset:
- "!hardware"
become: yes
become_method: enable
To set enable mode for all tasks in a single play, add ``become`` at the play level:
.. code-block:: yaml
- hosts: eos-switches
become: yes
become_method: enable
tasks:
- name: Gather facts (eos)
arista.eos.eos_facts:
gather_subset:
- "!hardware"
Setting enable mode for all tasks
---------------------------------
Often you will want all tasks in all plays to run in privileged (``enable``) mode; this is best achieved by using ``group_vars``:
**group_vars/eos.yml**
.. code-block:: yaml
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: arista.eos.eos
ansible_user: myuser
ansible_become: yes
ansible_become_method: enable
Passwords for enable mode
^^^^^^^^^^^^^^^^^^^^^^^^^
If you need a password to enter ``enable`` mode, you can specify it in one of two ways:
* providing the :option:`--ask-become-pass <ansible-playbook --ask-become-pass>` command line option
* setting the ``ansible_become_password`` connection variable
.. warning::
As a reminder passwords should never be stored in plain text. For information on encrypting your passwords and other secrets with Ansible Vault, see :ref:`vault`.
authorize and auth_pass
-----------------------
Ansible still supports ``enable`` mode with ``connection: local`` for legacy network playbooks. To enter ``enable`` mode with ``connection: local``, use the module options ``authorize`` and ``auth_pass``:
.. code-block:: yaml
- hosts: eos-switches
ansible_connection: local
tasks:
- name: Gather facts (eos)
eos_facts:
gather_subset:
- "!hardware"
provider:
authorize: yes
auth_pass: " {{ secret_auth_pass }}"
We recommend updating your playbooks to use ``become`` for network-device ``enable`` mode consistently. The use of ``authorize`` and of ``provider`` dictionaries will be deprecated in future. Check the :ref:`platform_options` and :ref:`network_modules` documentation for details.
.. _become_windows:
Become and Windows
==================
Since Ansible 2.3, ``become`` can be used on Windows hosts through the
``runas`` method. Become on Windows uses the same inventory setup and
invocation arguments as ``become`` on a non-Windows host, so the setup and
variable names are the same as what is defined in this document.
While ``become`` can be used to assume the identity of another user, there are other uses for
it with Windows hosts. One important use is to bypass some of the
limitations that are imposed when running on WinRM, such as constrained network
delegation or accessing forbidden system calls like the WUA API. You can use
``become`` with the same user as ``ansible_user`` to bypass these limitations
and run commands that are not normally accessible in a WinRM session.
Administrative rights
---------------------
Many tasks in Windows require administrative privileges to complete. When using
the ``runas`` become method, Ansible will attempt to run the module with the
full privileges that are available to the remote user. If it fails to elevate
the user token, it will continue to use the limited token during execution.
A user must have the ``SeDebugPrivilege`` to run a become process with elevated
privileges. This privilege is assigned to Administrators by default. If the
debug privilege is not available, the become process will run with a limited
set of privileges and groups.
To determine the type of token that Ansible was able to get, run the following
task:
.. code-block:: yaml
- name: Check my user name
ansible.windows.win_whoami:
become: yes
The output will look something similar to the below:
.. code-block:: ansible-output
ok: [windows] => {
"account": {
"account_name": "vagrant-domain",
"domain_name": "DOMAIN",
"sid": "S-1-5-21-3088887838-4058132883-1884671576-1105",
"type": "User"
},
"authentication_package": "Kerberos",
"changed": false,
"dns_domain_name": "DOMAIN.LOCAL",
"groups": [
{
"account_name": "Administrators",
"attributes": [
"Mandatory",
"Enabled by default",
"Enabled",
"Owner"
],
"domain_name": "BUILTIN",
"sid": "S-1-5-32-544",
"type": "Alias"
},
{
"account_name": "INTERACTIVE",
"attributes": [
"Mandatory",
"Enabled by default",
"Enabled"
],
"domain_name": "NT AUTHORITY",
"sid": "S-1-5-4",
"type": "WellKnownGroup"
},
],
"impersonation_level": "SecurityAnonymous",
"label": {
"account_name": "High Mandatory Level",
"domain_name": "Mandatory Label",
"sid": "S-1-16-12288",
"type": "Label"
},
"login_domain": "DOMAIN",
"login_time": "2018-11-18T20:35:01.9696884+00:00",
"logon_id": 114196830,
"logon_server": "DC01",
"logon_type": "Interactive",
"privileges": {
"SeBackupPrivilege": "disabled",
"SeChangeNotifyPrivilege": "enabled-by-default",
"SeCreateGlobalPrivilege": "enabled-by-default",
"SeCreatePagefilePrivilege": "disabled",
"SeCreateSymbolicLinkPrivilege": "disabled",
"SeDebugPrivilege": "enabled",
"SeDelegateSessionUserImpersonatePrivilege": "disabled",
"SeImpersonatePrivilege": "enabled-by-default",
"SeIncreaseBasePriorityPrivilege": "disabled",
"SeIncreaseQuotaPrivilege": "disabled",
"SeIncreaseWorkingSetPrivilege": "disabled",
"SeLoadDriverPrivilege": "disabled",
"SeManageVolumePrivilege": "disabled",
"SeProfileSingleProcessPrivilege": "disabled",
"SeRemoteShutdownPrivilege": "disabled",
"SeRestorePrivilege": "disabled",
"SeSecurityPrivilege": "disabled",
"SeShutdownPrivilege": "disabled",
"SeSystemEnvironmentPrivilege": "disabled",
"SeSystemProfilePrivilege": "disabled",
"SeSystemtimePrivilege": "disabled",
"SeTakeOwnershipPrivilege": "disabled",
"SeTimeZonePrivilege": "disabled",
"SeUndockPrivilege": "disabled"
},
"rights": [
"SeNetworkLogonRight",
"SeBatchLogonRight",
"SeInteractiveLogonRight",
"SeRemoteInteractiveLogonRight"
],
"token_type": "TokenPrimary",
"upn": "[email protected]",
"user_flags": []
}
Under the ``label`` key, the ``account_name`` entry determines whether the user
has Administrative rights. Here are the labels that can be returned and what
they represent:
* ``Medium``: Ansible failed to get an elevated token and ran under a limited
token. Only a subset of the privileges assigned to user are available during
the module execution and the user does not have administrative rights.
* ``High``: An elevated token was used and all the privileges assigned to the
user are available during the module execution.
* ``System``: The ``NT AUTHORITY\System`` account is used and has the highest
level of privileges available.
The output will also show the list of privileges that have been granted to the
user. When the privilege value is ``disabled``, the privilege is assigned to
the logon token but has not been enabled. In most scenarios these privileges
are automatically enabled when required.
If running on a version of Ansible that is older than 2.5 or the normal
``runas`` escalation process fails, an elevated token can be retrieved by:
* Set the ``become_user`` to ``System`` which has full control over the
operating system.
* Grant ``SeTcbPrivilege`` to the user Ansible connects with on
WinRM. ``SeTcbPrivilege`` is a high-level privilege that grants
full control over the operating system. No user is given this privilege by
default, and care should be taken if you grant this privilege to a user or group.
For more information on this privilege, please see
`Act as part of the operating system <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn221957(v=ws.11)>`_.
You can use the below task to set this privilege on a Windows host:
.. code-block:: yaml
- name: grant the ansible user the SeTcbPrivilege right
ansible.windows.win_user_right:
name: SeTcbPrivilege
users: '{{ansible_user}}'
action: add
* Turn UAC off on the host and reboot before trying to become the user. UAC is
a security protocol that is designed to run accounts with the
``least privilege`` principle. You can turn UAC off by running the following
tasks:
.. code-block:: yaml
- name: turn UAC off
win_regedit:
path: HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system
name: EnableLUA
data: 0
type: dword
state: present
register: uac_result
- name: reboot after disabling UAC
win_reboot:
when: uac_result is changed
.. Note:: Granting the ``SeTcbPrivilege`` or turning UAC off can cause Windows
security vulnerabilities and care should be given if these steps are taken.
Local service accounts
----------------------
Prior to Ansible version 2.5, ``become`` only worked on Windows with a local or domain
user account. Local service accounts like ``System`` or ``NetworkService``
could not be used as ``become_user`` in these older versions. This restriction
has been lifted since the 2.5 release of Ansible. The three service accounts
that can be set under ``become_user`` are:
* System
* NetworkService
* LocalService
Because local service accounts do not have passwords, the
``ansible_become_password`` parameter is not required and is ignored if
specified.
Become without setting a password
---------------------------------
As of Ansible 2.8, ``become`` can be used to become a Windows local or domain account
without requiring a password for that account. For this method to work, the
following requirements must be met:
* The connection user has the ``SeDebugPrivilege`` privilege assigned
* The connection user is part of the ``BUILTIN\Administrators`` group
* The ``become_user`` has either the ``SeBatchLogonRight`` or ``SeNetworkLogonRight`` user right
Using become without a password is achieved in one of two different ways:
* Duplicating an existing logon session's token if the account is already logged on
* Using S4U to generate a logon token that is valid on the remote host only
In the first scenario, the become process is spawned from another logon of that
user account, such as an existing RDP or console logon, but such a session is
not guaranteed to exist at the time the task runs. This is similar to the
``Run only when user is logged on`` option for a Scheduled Task.
In the case where another logon of the become account does not exist, S4U is
used to create a new logon and run the module through that. This is similar to
the ``Run whether user is logged on or not`` with the ``Do not store password``
option for a Scheduled Task. In this scenario, the become process will not be
able to access any network resources like a normal WinRM process.
To make a distinction between using become with no password and becoming an
account that has no password, make sure to keep ``ansible_become_password``
undefined or set ``ansible_become_password:``.
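A sketch of such a task (the account name is a placeholder; note that ``ansible_become_password`` is deliberately left undefined):

.. code-block:: yaml

    - name: Run as a domain account without supplying its password
      ansible.windows.win_whoami:
      vars:
        ansible_become: yes
        ansible_become_method: runas
        ansible_become_user: DOMAIN\svc-deploy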
.. Note:: Because there are no guarantees an existing token will exist for a
user when Ansible runs, there's a high chance the become process will only
have access to local resources. Use become with a password if the task needs
to access network resources.
Accounts without a password
---------------------------
.. Warning:: As a general security best practice, you should avoid allowing accounts without passwords.
Ansible can be used to become a Windows account that does not have a password (like the
``Guest`` account). To become an account without a password, set up the
variables like normal but set ``ansible_become_password: ''``.
Before become can work on an account like this, the local policy
`Accounts: Limit local account use of blank passwords to console logon only <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852174(v=ws.11)>`_
must be disabled. This can either be done through a Group Policy Object (GPO)
or with this Ansible task:
.. code-block:: yaml
- name: allow blank password on become
ansible.windows.win_regedit:
path: HKLM:\SYSTEM\CurrentControlSet\Control\Lsa
name: LimitBlankPasswordUse
data: 0
type: dword
state: present
.. Note:: This is only for accounts that do not have a password. You still need
to set the account's password under ``ansible_become_password`` if the
become_user has a password.
Become flags for Windows
------------------------
Ansible 2.5 added the ``become_flags`` parameter to the ``runas`` become method.
This parameter can be set using the ``become_flags`` task directive or set in
Ansible's configuration using ``ansible_become_flags``. The two valid values
that are initially supported for this parameter are ``logon_type`` and
``logon_flags``.
.. Note:: These flags should only be set when becoming a normal user account, not a local service account like LocalSystem.
The key ``logon_type`` sets the type of logon operation to perform. The value
can be set to one of the following:
* ``interactive``: The default logon type. The process will be run under a
context that is the same as when running a process locally. This bypasses all
WinRM restrictions and is the recommended method to use.
* ``batch``: Runs the process under a batch context that is similar to a
scheduled task with a password set. This should bypass most WinRM
restrictions and is useful if the ``become_user`` is not allowed to log on
interactively.
* ``new_credentials``: Runs under the same credentials as the calling user, but
outbound connections are run under the context of the ``become_user`` and
``become_password``, similar to ``runas.exe /netonly``. The ``logon_flags``
flag should also be set to ``netcredentials_only``. Use this flag if
the process needs to access a network resource (like an SMB share) using a
different set of credentials.
* ``network``: Runs the process under a network context without any cached
credentials. This results in the same type of logon session as running a
normal WinRM process without credential delegation, and operates under the same
restrictions.
* ``network_cleartext``: Like the ``network`` logon type, but instead caches
the credentials so it can access network resources. This is the same type of
logon session as running a normal WinRM process with credential delegation.
For more information, see
`dwLogonType <https://docs.microsoft.com/en-gb/windows/desktop/api/winbase/nf-winbase-logonusera>`_.
The ``logon_flags`` key specifies how Windows will log the user on when creating
the new process. The value can be set to none or multiple of the following:
* ``with_profile``: The default logon flag set. The process will load the
user's profile in the ``HKEY_USERS`` registry key to ``HKEY_CURRENT_USER``.
* ``netcredentials_only``: The process will use the same token as the caller
but will use the ``become_user`` and ``become_password`` when accessing a remote
resource. This is useful in inter-domain scenarios where there is no trust
relationship, and should be used with the ``new_credentials`` ``logon_type``.
By default, ``logon_flags=with_profile`` is set. If the profile should not be
loaded, set ``logon_flags=``; if the profile should be loaded together with
``netcredentials_only``, set ``logon_flags=with_profile,netcredentials_only``.
For more information, see `dwLogonFlags <https://docs.microsoft.com/en-gb/windows/desktop/api/winbase/nf-winbase-createprocesswithtokenw>`_.
Here are some examples of how to use ``become_flags`` with Windows tasks:
.. code-block:: yaml
- name: copy a file from a fileshare with custom credentials
ansible.windows.win_copy:
src: \\server\share\data\file.txt
dest: C:\temp\file.txt
remote_src: yes
vars:
ansible_become: yes
ansible_become_method: runas
ansible_become_user: DOMAIN\user
ansible_become_password: Password01
ansible_become_flags: logon_type=new_credentials logon_flags=netcredentials_only
- name: run a command under a batch logon
ansible.windows.win_whoami:
become: yes
become_flags: logon_type=batch
- name: run a command and not load the user profile
ansible.windows.win_whoami:
become: yes
become_flags: logon_flags=
Limitations of become on Windows
--------------------------------
* Running a task with ``async`` and ``become`` on Windows Server 2008, 2008 R2
and Windows 7 only works when using Ansible 2.7 or newer.
* By default, the become user logs on with an interactive session, so it must
have the right to do so on the Windows host. If it does not inherit the
``SeAllowLogOnLocally`` privilege or inherits the ``SeDenyLogOnLocally``
privilege, the become process will fail. Either add the privilege or set the
``logon_type`` flag to change the logon type used.
* Prior to Ansible version 2.3, become only worked when
``ansible_winrm_transport`` was either ``basic`` or ``credssp``. This
restriction has been lifted since the 2.4 release of Ansible for all hosts
except Windows Server 2008 (non R2 version).
* The Secondary Logon service ``seclogon`` must be running to use ``ansible_become_method: runas``.
Resolving Temporary File Error Messages
----------------------------------------
"Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user"
* This error can be resolved by installing the package that provides the ``setfacl`` command. (This is frequently the ``acl`` package, but check your OS documentation.)
.. seealso::
`Mailing List <https://groups.google.com/forum/#!forum/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,621 |
module tempfile.py causing circular import when following module hacking docs
|
### Summary
I am trying to run my_test.py by following the documentation in [1]. This fails with this error:
```
╰─➤ python -m ansible.modules.my_test /tmp/args.json
Traceback (most recent call last):
File "/usr/lib/python3.8/[runpy.py](http://runpy.py/)", line 194, in \_run\_module\_as\_main
return \_run\_code(code, main\_globals, None,
File "/usr/lib/python3.8/[runpy.py](http://runpy.py/)", line 87, in \_run\_code
exec(code, run\_globals)
File "/home/dohlemacher/cloud/ansible/lib/ansible/modules/[david.py](http://david.py/)", line 72, in \<module>
from ansible.module\_utils.basic import AnsibleModule
File "/home/dohlemacher/cloud/ansible/lib/ansible/module\_utils/[basic.py](http://basic.py/)", line 53, in \<module>
import tempfile
File "/home/dohlemacher/cloud/ansible/lib/ansible/modules/[tempfile.py](http://tempfile.py/)", line 79, in \<module>
from tempfile import mkstemp, mkdtemp
ImportError: cannot import name 'mkstemp' from partially initialized module 'tempfile' (most likely due to a circular import) (/home/dohlemacher/cloud/ansible/lib/ansible/modules/[tempfile.py](http://tempfile.py/))
```
I found a closed ticket [2] that suggests renaming `modules/tempfile.py`. I did this. It fixes my circular dependency.
With this hack of the ansible modules, the module runs:
```
╰─➤ python -m ansible.modules.my_test /tmp/args.json
{"changed": true, "original_message": "hello", "message": "goodbye", "invocation": {"module_args": {"name": "hello", "new": true}}}
```
[1] https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#verifying-your-module-code-locally
[2] https://github.com/ansible/ansible/issues/25333
### Issue Type
Documentation Report
### Component Name
module hacking
### Ansible Version
```console
╰─➤ ansible --version
ansible 2.10.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/dohlemacher/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/dohlemacher/cloud/ansible/lib/ansible
executable location = /home/dohlemacher/cloud/ansible/bin/ansible
python version = 3.8.12 (default, Jan 15 2022, 18:39:47) [GCC 7.5.0]
```
This also happens on the head of devel, but I tried release tag 2.10.9.
```
╰─➤ git show-ref | grep devel
d06d99cbb6a61db043436092663de8b20f6d2017 refs/heads/devel
d06d99cbb6a61db043436092663de8b20f6d2017 refs/remotes/origin/devel
c6d7baeec5bca7f02fc4ddbb7052f8de081f9cdf refs/remotes/origin/temp-2.10-devel
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_SSH_ARGS(env: ANSIBLE_SSH_ARGS) =
HOST_KEY_CHECKING(env: ANSIBLE_HOST_KEY_CHECKING) = False
(END)
```
### OS / Environment
```
╰─➤ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
```
### Steps to Reproduce
Just follow https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#creating-a-module
### Expected Results
```
╰─➤ python -m ansible.modules.my_test /tmp/args.json
{"changed": true, "original_message": "hello", "message": "goodbye", "invocation": {"module_args": {"name": "hello", "new": true}}}
```
### Actual Results
```console
╰─➤ python -m ansible.modules.my_test /tmp/args.json
Traceback (most recent call last):
File "/usr/lib/python3.8/[runpy.py](http://runpy.py/)", line 194, in \_run\_module\_as\_main
return \_run\_code(code, main\_globals, None,
File "/usr/lib/python3.8/[runpy.py](http://runpy.py/)", line 87, in \_run\_code
exec(code, run\_globals)
File "/home/dohlemacher/cloud/ansible/lib/ansible/modules/[david.py](http://david.py/)", line 72, in \<module>
from ansible.module\_utils.basic import AnsibleModule
File "/home/dohlemacher/cloud/ansible/lib/ansible/module\_utils/[basic.py](http://basic.py/)", line 53, in \<module>
import tempfile
File "/home/dohlemacher/cloud/ansible/lib/ansible/modules/[tempfile.py](http://tempfile.py/)", line 79, in \<module>
from tempfile import mkstemp, mkdtemp
ImportError: cannot import name 'mkstemp' from partially initialized module 'tempfile' (most likely due to a circular import) (/home/dohlemacher/cloud/ansible/lib/ansible/modules/[tempfile.py](http://tempfile.py/))
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77621
|
https://github.com/ansible/ansible/pull/77647
|
dd094a4413ff7f8af018f105c296005675215236
|
255745b3e6460558d7fae4492e14b570c0569631
| 2022-04-24T14:19:15Z |
python
| 2022-05-05T14:35:05Z |
docs/docsite/rst/dev_guide/developing_modules_general.rst
|
.. _developing_modules_general:
.. _module_dev_tutorial_sample:
**************************
Developing Ansible modules
**************************
A module is a reusable, standalone script that Ansible runs on your behalf, either locally or remotely. Modules interact with your local machine, an API, or a remote system to perform specific tasks like changing a database password or spinning up a cloud instance. Each module can be used by the Ansible API, or by the :command:`ansible` or :command:`ansible-playbook` programs. A module provides a defined interface, accepts arguments, and returns information to Ansible by printing a JSON string to stdout before exiting.
If you need functionality that is not available in any of the thousands of Ansible modules found in collections, you can easily write your own custom module. When you write a module for local use, you can choose any programming language and follow your own rules. Use this topic to learn how to create an Ansible module in Python. After you create a module, you must add it locally to the appropriate directory so that Ansible can find and execute it. For details about adding a module locally, see :ref:`developing_locally`.
.. contents::
:local:
.. _environment_setup:
Preparing an environment for developing Ansible modules
=======================================================
Installing prerequisites via apt (Ubuntu)
-----------------------------------------
Due to dependencies (for example ansible -> paramiko -> pynacl -> libffi):
.. code:: bash
sudo apt update
sudo apt install build-essential libssl-dev libffi-dev python-dev
Installing prerequisites via yum (CentOS)
-----------------------------------------
Due to dependencies (for example ansible -> paramiko -> pynacl -> libffi):
.. code:: bash
sudo yum check-update
sudo yum update
sudo yum group install "Development Tools"
sudo yum install python3-devel openssl-devel libffi libffi-devel
Creating a development environment (platform-independent steps)
---------------------------------------------------------------
1. Clone the Ansible repository:
``$ git clone https://github.com/ansible/ansible.git``
2. Change directory into the repository root dir: ``$ cd ansible``
3. Create a virtual environment: ``$ python3 -m venv venv`` (or for
Python 2 ``$ virtualenv venv``. Note, this requires you to install
the virtualenv package: ``$ pip install virtualenv``)
4. Activate the virtual environment: ``$ . venv/bin/activate``
5. Install development requirements:
``$ pip install -r requirements.txt``
6. Run the environment setup script for each new development shell process:
``$ . hacking/env-setup``
.. note:: After the initial setup above, every time you are ready to start
developing Ansible you should be able to just run the following from the
root of the Ansible repo:
``$ . venv/bin/activate && . hacking/env-setup``
Creating an info or a facts module
==================================
Ansible gathers information about the target machines using facts modules, and gathers information on other objects or files using info modules.
If you find yourself trying to add ``state: info`` or ``state: list`` to an existing module, that is often a sign that a new dedicated ``_facts`` or ``_info`` module is needed.
In Ansible 2.8 and onwards, we have two types of information modules: ``*_info`` and ``*_facts``.
If a module is named ``<something>_facts``, it should be because its main purpose is returning ``ansible_facts``. Do not name modules that do not do this with ``_facts``.
Only use ``ansible_facts`` for information that is specific to the host machine, for example network interfaces and their configuration, which operating system and which programs are installed.
Modules that query/return general information (and not ``ansible_facts``) should be named ``_info``.
General information is non-host specific information, for example information on online/cloud services (you can access different accounts for the same online service from the same host), or information on VMs and containers accessible from the machine, or information on individual files or programs.
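As a short sketch of the practical difference, facts land directly in host
variables, while an info module's result must be registered (``my_test_info``
is the example info module created below):

.. code-block:: yaml

    # a facts module populates ansible_facts, so its data becomes host variables
    - ansible.builtin.setup:
    - ansible.builtin.debug:
        var: ansible_distribution

    # an info module returns a normal result dictionary, so register it to use it
    - my_test_info:
        name: hello
      register: info_result
    - ansible.builtin.debug:
        var: info_result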
Info and facts modules are just like any other Ansible module, with a few minor requirements:
1. They MUST be named ``<something>_info`` or ``<something>_facts``, where <something> is singular.
2. Info ``*_info`` modules MUST return in the form of the :ref:`result dictionary<common_return_values>` so other modules can access them.
3. Fact ``*_facts`` modules MUST return in the ``ansible_facts`` field of the :ref:`result dictionary<common_return_values>` so other modules can access them.
4. They MUST support :ref:`check_mode <check_mode_dry>`.
5. They MUST NOT make any changes to the system.
6. They MUST document the :ref:`return fields<return_block>` and :ref:`examples<examples_block>`.
To create an info module:
1. Navigate to the correct directory for your new module: ``$ cd lib/ansible/modules/``. If you are developing a module in a collection, ``$ cd plugins/modules/`` inside your collection development tree.
2. Create your new module file: ``$ touch my_test_info.py``.
3. Paste the content below into your new info module file. It includes the :ref:`required Ansible format and documentation <developing_modules_documenting>`, a simple :ref:`argument spec for declaring the module options <argument_spec>`, and some example code.
4. Modify and extend the code to do what you want your new info module to do. See the :ref:`programming tips <developing_modules_best_practices>` and :ref:`Python 3 compatibility <developing_python_3>` pages for pointers on writing clean and concise module code.
.. literalinclude:: ../../../../examples/scripts/my_test_info.py
:language: python
Use the same process to create a facts module.
.. literalinclude:: ../../../../examples/scripts/my_test_facts.py
:language: python
Creating a module
=================
To create a module:
1. Navigate to the correct directory for your new module: ``$ cd lib/ansible/modules/``. If you are developing a module in a :ref:`collection <developing_collections>`, ``$ cd plugins/modules/`` inside your collection development tree.
2. Create your new module file: ``$ touch my_test.py``.
3. Paste the content below into your new module file. It includes the :ref:`required Ansible format and documentation <developing_modules_documenting>`, a simple :ref:`argument spec for declaring the module options <argument_spec>`, and some example code.
4. Modify and extend the code to do what you want your new module to do. See the :ref:`programming tips <developing_modules_best_practices>` and :ref:`Python 3 compatibility <developing_python_3>` pages for pointers on writing clean and concise module code.
.. literalinclude:: ../../../../examples/scripts/my_test.py
:language: python
Verifying your module code
==========================
After you modify the sample code above to do what you want, you can try out your module.
Our :ref:`debugging tips <debugging_modules>` will help if you run into bugs as you verify your module code.
Verifying your module code locally
----------------------------------
If your module does not need to target a remote host, you can quickly and easily exercise your code locally like this:
- Create an arguments file, a basic JSON config file that passes parameters to your module so that you can run it. Name the arguments file ``/tmp/args.json`` and add the following content:
.. code:: json
{
"ANSIBLE_MODULE_ARGS": {
"name": "hello",
"new": true
}
}
- If you are using a virtual environment (which is highly recommended for
development) activate it: ``$ . venv/bin/activate``
- Set up the environment for development: ``$ . hacking/env-setup``
- Run your test module locally and directly:
``$ python -m ansible.modules.my_test /tmp/args.json``
This should return output like this:
.. code:: json
{"changed": true, "state": {"original_message": "hello", "new_message": "goodbye"}, "invocation": {"module_args": {"name": "hello", "new": true}}}
Verifying your module code in a playbook
----------------------------------------
The next step in verifying your new module is to consume it with an Ansible playbook.
- Create a playbook in any directory: ``$ touch testmod.yml``
- Add the following to the new playbook file:
.. code-block:: yaml
- name: test my new module
hosts: localhost
tasks:
- name: run the new module
my_test:
name: 'hello'
new: true
register: testout
- name: dump test output
debug:
msg: '{{ testout }}'
- Run the playbook and analyze the output: ``$ ansible-playbook ./testmod.yml``
Testing your newly-created module
=================================
The following two examples will get you started with testing your module code. Please review our :ref:`testing <developing_testing>` section for more detailed
information, including instructions for :ref:`testing module documentation <testing_module_documentation>`, adding :ref:`integration tests <testing_integration>`, and more.
.. note::
Every new module and plugin should have integration tests, even if the tests cannot be run on Ansible CI infrastructure.
In this case, the tests should be marked with the ``unsupported`` alias in `aliases file <https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/integration-aliases.html>`_.
Performing sanity tests
-----------------------
You can run through Ansible's sanity checks in a container:
``$ ansible-test sanity -v --docker --python 2.7 MODULE_NAME``
.. note::
Note that this example requires Docker to be installed and running. If you'd rather not use a container for this, you can choose to use ``--venv`` instead of ``--docker``.
Adding unit tests
-----------------
You can add unit tests for your module in ``./test/units/modules``. You must first set up your testing environment. In this example, we're using Python 3.5.
- Install the requirements (outside of your virtual environment): ``$ pip3 install -r ./test/lib/ansible_test/_data/requirements/units.txt``
- Run ``. hacking/env-setup``
- To run all tests do the following: ``$ ansible-test units --python 3.5``. If you are using a CI environment, these tests will run automatically.
.. note:: Ansible uses pytest for unit testing.
To run pytest against a single test module, you can run the following command. Ensure that you are providing the correct path of the test module:
``$ pytest -r a --cov=. --cov-report=html --fulltrace --color yes
test/units/modules/.../test/my_test.py``
Contributing back to Ansible
============================
If you would like to contribute to ``ansible-core`` by adding a new feature or fixing a bug, `create a fork <https://help.github.com/articles/fork-a-repo/>`_ of the ansible/ansible repository and develop against a new feature branch using the ``devel`` branch as a starting point. When you have a good working code change, you can submit a pull request to the Ansible repository by selecting your feature branch as a source and the Ansible devel branch as a target.
If you want to contribute a module to an :ref:`Ansible collection <contributing_maintained_collections>`, review our :ref:`submission checklist <developing_modules_checklist>`, :ref:`programming tips <developing_modules_best_practices>`, and :ref:`strategy for maintaining Python 2 and Python 3 compatibility <developing_python_3>`, as well as information about :ref:`testing <developing_testing>` before you open a pull request.
The :ref:`Community Guide <ansible_community_guide>` covers how to open a pull request and what happens next.
Communication and development support
=====================================
Join the ``#ansible-devel`` chat channel (using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_) for discussions surrounding Ansible development.
For questions and discussions pertaining to using the Ansible product, join the ``#ansible`` channel.
To find other topic-specific chat channels, look at :ref:`Community Guide, Communicating <communication_irc>`.
Credit
======
Thank you to Thomas Stringer (`@trstringer <https://github.com/trstringer>`_) for contributing source
material for this topic.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,285 |
Bundling Playbooks within a Collection
|
### Summary
I want to distribute a playbook in one of my collections. The docs state that this is possible:
* [Developing collections #playbooks-directory](https://docs.ansible.com/ansible/latest/dev_guide/developing_collections_structure.html#playbooks-directory)
However, I do not find an example of how this could be done:
* The above-mentioned docs only show the directory structure, not where the playbook file needs to be placed, nor any examples.
* When using `ansible-galaxy collection init ...` a skeleton is created but it does not contain a `playbooks` directory where I could check how this is done by default.
* I found the right command to call in [this Pull Request](https://github.com/ansible/ansible/pull/67435), but there is no further info about the structure needed in the collection.
In the end I tried just putting a `test.yml` file containing a simple playbook into `namespace/collection/playbooks/` but when using that collection Ansible states that `ERROR! the playbook: namespace.collection.test could not be found`.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide/developing_collections_structure.rst
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=3600s
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['ansible.posix.profile_tasks']
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INVENTORY_ENABLED(/etc/ansible/ansible.cfg) = ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml', 'community.vmware.vmware_vm_inventory']
```
### OS / Environment
Ubuntu 20.04
### Additional Information
Having examples available would help to better understand how this feature is meant to work.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77285
|
https://github.com/ansible/ansible/pull/77747
|
18fe5a88352a460dcc4fbe49a478552c80a0f101
|
42086c14a36ccf3eb16b10350852ba1ffc2ed6d1
| 2022-03-15T13:28:46Z |
python
| 2022-05-06T15:30:42Z |
docs/docsite/rst/dev_guide/developing_collections_structure.rst
|
.. _collection_structure:
********************
Collection structure
********************
A collection is a simple data structure. None of the directories are required unless you have specific content that belongs in one of them. A collection does require a ``galaxy.yml`` file at the root level of the collection. This file contains all of the metadata that Galaxy and other tools need in order to package, build and publish the collection.
.. contents::
:local:
:depth: 2
Collection directories and files
================================
A collection can contain these directories and files:
.. code-block:: shell-session
collection/
├── docs/
├── galaxy.yml
├── meta/
│ └── runtime.yml
├── plugins/
│ ├── modules/
│ │ └── module1.py
│ ├── inventory/
│ └── .../
├── README.md
├── roles/
│ ├── role1/
│ ├── role2/
│ └── .../
├── playbooks/
│ ├── files/
│ ├── vars/
│ ├── templates/
│ └── tasks/
└── tests/
.. note::
* Ansible only accepts ``.md`` extensions for the :file:`README` file and any files in the :file:`/docs` folder.
* See the `ansible-collections <https://github.com/ansible-collections/>`_ GitHub Org for examples of collection structure.
* Not all directories are currently in use. Those are placeholders for future features.
.. _galaxy_yml:
galaxy.yml
----------
A collection must have a ``galaxy.yml`` file that contains the necessary information to build a collection artifact. See :ref:`collections_galaxy_meta` for details.
.. _collections_doc_dir:
docs directory
--------------
Use the ``docs`` folder to describe how to use the roles and plugins the collection provides, role requirements, and so on.
For certified collections, Automation Hub displays documents written in markdown in the main ``docs`` directory with no subdirectories. This will not display on https://docs.ansible.com.
For community collections included in the Ansible PyPI package, docs.ansible.com displays documents written in reStructuredText (.rst) in a docsite/rst/ subdirectory. Define the structure of your extra documentation in ``docs/docsite/extra-docs.yml``:
.. code-block:: yaml
---
sections:
- title: Scenario Guide
toctree:
- scenario_guide
The index page of the documentation for your collection displays the title you define in ``docs/docsite/extra-docs.yml`` with a link to your extra documentation. For an example, see the `community.docker collection repo <https://github.com/ansible-collections/community.docker/tree/main/docs/docsite>`_ and the `community.docker collection documentation <https://docs.ansible.com/ansible/latest/collections/community/docker/index.html>`_.
You can add extra links to your collection index page and plugin pages with the ``docs/docsite/links.yml`` file. This populates the links under `Description and Communications <https://docs.ansible.com/ansible/devel/collections/community/dns/index.html#plugins-in-community-dns>`_ headings as well as links at the end of the individual plugin pages. See the `collection_template links.yml file <https://github.com/ansible-collections/collection_template/blob/main/docs/docsite/links.yml>`_ for a complete description of the structure and use of this file to create links.
Plugin and module documentation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Keep the specific documentation for plugins and modules embedded as Python docstrings. Use ``ansible-doc`` to view documentation for plugins inside a collection:
.. code-block:: bash
ansible-doc -t lookup my_namespace.my_collection.lookup1
The ``ansible-doc`` command requires the fully qualified collection name (FQCN) to display specific plugin documentation. In this example, ``my_namespace`` is the Galaxy namespace and ``my_collection`` is the collection name within that namespace.
.. note:: The Galaxy namespace of an Ansible collection is defined in the ``galaxy.yml`` file. It can be different from the GitHub organization or repository name.
.. _collections_plugin_dir:
plugins directory
-----------------
Add a 'per plugin type' specific subdirectory here, including ``module_utils``, which is usable not only by modules but also by most other plugins, through their FQCN. This is a way to distribute modules, lookups, filters, and so on without having to import a role in every play.
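For example, a sketch of consuming a collection's lookup and filter plugins
from a play through their FQCN (``lookup1`` matches the example used later in
this document; ``filter1`` is a hypothetical filter name):

.. code-block:: yaml

    - name: use a lookup and a filter distributed in a collection
      ansible.builtin.debug:
        msg: "{{ lookup('my_namespace.my_collection.lookup1', 'some_key') | my_namespace.my_collection.filter1 }}"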
Vars plugins are supported in collections as long as they require being explicitly enabled (using ``REQUIRES_ENABLED``) and they are included using their fully qualified collection name. See :ref:`enable_vars` and :ref:`developing_vars_plugins` for details. Cache plugins may be used in collections for fact caching, but are not supported for inventory plugins.
.. _collection_module_utils:
module_utils
^^^^^^^^^^^^
When coding with ``module_utils`` in a collection, the Python ``import`` statement needs to take into account the FQCN along with the ``ansible_collections`` convention. The resulting Python import will look like ``from ansible_collections.{namespace}.{collection}.plugins.module_utils.{util} import {something}``
The following example snippets show a Python and PowerShell module using both default Ansible ``module_utils`` and
those provided by a collection. In this example the namespace is ``community``, the collection is ``test_collection``.
In the Python example the ``module_util`` in question is called ``qradar`` such that the FQCN is
``community.test_collection.plugins.module_utils.qradar``:
.. code-block:: python
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.six.moves.urllib.parse import urlencode, quote_plus
from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible_collections.community.test_collection.plugins.module_utils.qradar import QRadarRequest
argspec = dict(
name=dict(required=True, type='str'),
state=dict(choices=['present', 'absent'], required=True),
)
module = AnsibleModule(
argument_spec=argspec,
supports_check_mode=True
)
qradar_request = QRadarRequest(
module,
headers={"Content-Type": "application/json"},
not_rest_data_keys=['state']
)
Note that importing something from an ``__init__.py`` file requires using the file name:
.. code-block:: python
from ansible_collections.namespace.collection_name.plugins.callback.__init__ import CustomBaseClass
In the PowerShell example the ``module_util`` in question is called ``hyperv`` such that the FQCN is
``community.test_collection.plugins.module_utils.hyperv``:
.. code-block:: powershell
#!powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
#AnsibleRequires -PowerShell ansible_collections.community.test_collection.plugins.module_utils.hyperv
$spec = @{
name = @{ required = $true; type = "str" }
state = @{ required = $true; choices = @("present", "absent") }
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
Invoke-HyperVFunction -Name $module.Params.name
$module.ExitJson()
.. _collections_roles_dir:
roles directory
----------------
Collection roles are mostly the same as existing roles, but with a couple of limitations:
- Role names are now limited to contain only lowercase alphanumeric characters, plus ``_``, and must start with an alphabetic character.
- Roles in a collection cannot contain plugins any more. Plugins must live in the collection ``plugins`` directory tree. Each plugin is accessible to all roles in the collection.
The directory name of the role is used as the role name. Therefore, the directory name must comply with the above role name rules. The collection import into Galaxy will fail if a role name does not comply with these rules.
You can migrate 'traditional roles' into a collection but they must follow the rules above. You may need to rename roles if they don't conform. You will have to move or link any role-based plugins to the collection specific directories.
.. note::
For roles imported into Galaxy directly from a GitHub repository, setting the ``role_name`` value in the role's metadata overrides the role name used by Galaxy. For collections, that value is ignored. When importing a collection, Galaxy uses the role directory as the name of the role and ignores the ``role_name`` metadata value.
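For illustration, a minimal sketch of that metadata in a traditional role's
``meta/main.yml`` (the role name is a placeholder):

.. code-block:: yaml

    galaxy_info:
      # used by Galaxy for roles imported directly from GitHub;
      # ignored for roles that live inside a collection
      role_name: my_role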
playbooks directory
--------------------
In prior releases, you could reference playbooks in this directory using the full path to the playbook file from the command line.
In ansible-core 2.11 and later, you can use the FQCN, ``namespace.collection.playbook`` (with or without extension), to reference the playbooks from the command line or from ``import_playbook``.
This will keep the playbook in 'collection context', as if you had added ``collections: [ namespace.collection ]`` to it.
You can have most of the subdirectories you would expect, such as ``files/``, ``vars/`` or ``templates/``, but no ``roles/``, since those are handled already in the collection.
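For example, a sketch of referencing such a playbook by FQCN (all names here
are placeholders):

.. code-block:: yaml

    - name: include a playbook distributed in a collection
      ansible.builtin.import_playbook: my_namespace.my_collection.my_playbook

The same FQCN also works on the command line, for example
``ansible-playbook my_namespace.my_collection.my_playbook``.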
.. _developing_collections_tests_directory:
tests directory
----------------
Ansible Collections are tested much like Ansible itself, by using the `ansible-test` utility which is released as part of Ansible, version 2.9.0 and newer. Because Ansible Collections are tested using the same tooling as Ansible itself, via `ansible-test`, all Ansible developer documentation for testing is applicable for authoring Collections Tests with one key concept to keep in mind.
See :ref:`testing_collections` for specific information on how to test collections with ``ansible-test``.
When reading the :ref:`developing_testing` documentation, there will be content that applies to running Ansible from source code via a git clone, which is typical of an Ansible developer. However, an Ansible Collection author typically runs Ansible from a stable release rather than from source, and creating collections does not require running Ansible from source. Therefore, when references to `ansible-test` binary paths, command completion, or environment variables appear throughout the :ref:`developing_testing` documentation, keep in mind that they are not needed for Ansible Collection testing, because installing a stable release of Ansible that contains `ansible-test` is expected to set those things up for you.
.. _meta_runtime_yml:
meta directory and runtime.yml
------------------------------
A collection can store some additional metadata in a ``runtime.yml`` file in the collection's ``meta`` directory. The ``runtime.yml`` file supports the top level keys:
- *requires_ansible*:
The version of Ansible Core (ansible-core) required to use the collection. Multiple versions can be separated with a comma.
.. code:: yaml
requires_ansible: ">=2.10,<2.11"
.. note:: although the version is a `PEP440 Version Specifier <https://www.python.org/dev/peps/pep-0440/#version-specifiers>`_ under the hood, Ansible deviates from PEP440 behavior by truncating prerelease segments from the Ansible version. This means that Ansible 2.11.0b1 is compatible with something that ``requires_ansible: ">=2.11"``.
- *plugin_routing*:
Content in a collection that Ansible needs to load from another location or that has been deprecated/removed.
The top level keys of ``plugin_routing`` are types of plugins, with individual plugin names as subkeys.
To define a new location for a plugin, set the ``redirect`` field to another name.
To deprecate a plugin, use the ``deprecation`` field to provide a custom warning message and the removal version or date. If the plugin has been renamed or moved to a new location, the ``redirect`` field should also be provided. If a plugin is being removed entirely, ``tombstone`` can be used for the fatal error message and removal version or date.
.. code:: yaml
plugin_routing:
inventory:
kubevirt:
redirect: community.general.kubevirt
my_inventory:
tombstone:
removal_version: "2.0.0"
warning_text: my_inventory has been removed. Please use other_inventory instead.
modules:
my_module:
deprecation:
removal_date: "2021-11-30"
warning_text: my_module will be removed in a future release of this collection. Use another.collection.new_module instead.
redirect: another.collection.new_module
podman_image:
redirect: containers.podman.podman_image
module_utils:
ec2:
redirect: amazon.aws.ec2
util_dir.subdir.my_util:
redirect: namespace.name.my_util
- *import_redirection*
A mapping of names for Python import statements and their redirected locations.
.. code:: yaml
import_redirection:
ansible.module_utils.old_utility:
redirect: ansible_collections.namespace_name.collection_name.plugins.module_utils.new_location
- *action_groups*
A mapping of groups and the list of action plugin and module names they contain. They may also have a special 'metadata' dictionary in the list, which can be used to include actions from other groups.
.. code:: yaml
action_groups:
groupname:
# The special metadata dictionary. All action/module names should be strings.
- metadata:
extend_group:
- another.collection.groupname
- another_group
- my_action
another_group:
- my_module
- another.collection.another_module
.. seealso::
:ref:`distributing_collections`
Learn how to package and publish your collection
:ref:`contributing_maintained_collections`
Guidelines for contributing to selected collections
`Mailing List <https://groups.google.com/group/ansible-devel>`_
The development mailing list
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,481 |
Failed to get Debian minor version
|
### Summary
When I run setup module to get Debian 10.7 distribution information, it reported ansible_distribution_version=10. The real version is in /etc/debian_version, which is 10.7.
```
$ ansible 10.117.16.248 -m setup -a "filter=ansible_distribution*"
10.117.16.248 | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "10",
"ansible_distribution_release": "buster",
"ansible_distribution_version": "10"
},
"changed": false
}
$ ansible 10.117.16.248 -m shell -a "cat /etc/os-release"
10.117.16.248 | CHANGED | rc=0 >>
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ ansible 10.117.16.248 -m shell -a "cat /etc/debian_version"
10.117.16.248 | CHANGED | rc=0 >>
10.7
```
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible 2.10.4
config file = /home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg
configured module search path = ['/home/qiz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_SSH_RETRIES(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = 5
DEFAULT_CALLBACK_PLUGIN_PATH(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = ['/home/qiz/workspace/github/ansible-vsphe
DEFAULT_CALLBACK_WHITELIST(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = ['timer']
DEFAULT_HOST_LIST(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = ['/home/qiz/workspace/github/ansible-vsphere-gos-vali
DISPLAY_SKIPPED_HOSTS(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = False
```
### OS / Environment
Debian 10.7
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
I expected to get ansible_distribution_version=10.7 as below not 10.
$ ansible 10.117.16.248 -m setup -a "filter=ansible_distribution*"
10.117.16.248 | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "10",
"ansible_distribution_release": "buster",
"ansible_distribution_version": "10.7"
},
"changed": false
}
### Actual Results
```console
$ ansible 10.117.16.248 -m setup -a "filter=ansible_distribution*"
10.117.16.248 | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "10",
"ansible_distribution_release": "buster",
"ansible_distribution_version": "10"
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74481
|
https://github.com/ansible/ansible/pull/74721
|
6230244537b57fddf1bf822c74ffd0061eb240cc
|
524d30b8b0cc193900e04b99f629810cd2bb7dd5
| 2021-04-28T15:22:55Z |
python
| 2022-05-11T21:21:47Z |
changelogs/fragments/74481_debian_minor_version.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,481 |
Failed to get Debian minor version
|
### Summary
When I run setup module to get Debian 10.7 distribution information, it reported ansible_distribution_version=10. The real version is in /etc/debian_version, which is 10.7.
```
$ ansible 10.117.16.248 -m setup -a "filter=ansible_distribution*"
10.117.16.248 | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "10",
"ansible_distribution_release": "buster",
"ansible_distribution_version": "10"
},
"changed": false
}
$ ansible 10.117.16.248 -m shell -a "cat /etc/os-release"
10.117.16.248 | CHANGED | rc=0 >>
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ ansible 10.117.16.248 -m shell -a "cat /etc/debian_version"
10.117.16.248 | CHANGED | rc=0 >>
10.7
```
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible 2.10.4
config file = /home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg
configured module search path = ['/home/qiz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_SSH_RETRIES(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = 5
DEFAULT_CALLBACK_PLUGIN_PATH(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = ['/home/qiz/workspace/github/ansible-vsphe
DEFAULT_CALLBACK_WHITELIST(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = ['timer']
DEFAULT_HOST_LIST(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = ['/home/qiz/workspace/github/ansible-vsphere-gos-vali
DISPLAY_SKIPPED_HOSTS(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = False
```
### OS / Environment
Debian 10.7
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
I expected to get ansible_distribution_version=10.7 as below not 10.
$ ansible 10.117.16.248 -m setup -a "filter=ansible_distribution*"
10.117.16.248 | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "10",
"ansible_distribution_release": "buster",
"ansible_distribution_version": "10.7"
},
"changed": false
}
### Actual Results
```console
$ ansible 10.117.16.248 -m setup -a "filter=ansible_distribution*"
10.117.16.248 | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "10",
"ansible_distribution_release": "buster",
"ansible_distribution_version": "10"
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74481
|
https://github.com/ansible/ansible/pull/74721
|
6230244537b57fddf1bf822c74ffd0061eb240cc
|
524d30b8b0cc193900e04b99f629810cd2bb7dd5
| 2021-04-28T15:22:55Z |
python
| 2022-05-11T21:21:47Z |
lib/ansible/module_utils/facts/system/distribution.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import platform
import re
import ansible.module_utils.compat.typing as t
from ansible.module_utils.common.sys_info import get_distribution, get_distribution_version, \
get_distribution_codename
from ansible.module_utils.facts.utils import get_file_content
from ansible.module_utils.facts.collector import BaseFactCollector
def get_uname(module, flags=('-v')):
if isinstance(flags, str):
flags = flags.split()
command = ['uname']
command.extend(flags)
rc, out, err = module.run_command(command)
if rc == 0:
return out
return None
def _file_exists(path, allow_empty=False):
# not finding the file, exit early
if not os.path.exists(path):
return False
    # if just the path needs to exist (that is, it can be empty) we are done
if allow_empty:
return True
    # file exists but is empty and we don't allow_empty
if os.path.getsize(path) == 0:
return False
# file exists with some content
return True
class DistributionFiles:
'''has-a various distro file parsers (os-release, etc) and logic for finding the right one.'''
# every distribution name mentioned here, must have one of
# - allowempty == True
# - be listed in SEARCH_STRING
# - have a function get_distribution_DISTNAME implemented
# keep names in sync with Conditionals page of docs
OSDIST_LIST = (
{'path': '/etc/altlinux-release', 'name': 'Altlinux'},
{'path': '/etc/oracle-release', 'name': 'OracleLinux'},
{'path': '/etc/slackware-version', 'name': 'Slackware'},
{'path': '/etc/centos-release', 'name': 'CentOS'},
{'path': '/etc/redhat-release', 'name': 'RedHat'},
{'path': '/etc/vmware-release', 'name': 'VMwareESX', 'allowempty': True},
{'path': '/etc/openwrt_release', 'name': 'OpenWrt'},
{'path': '/etc/os-release', 'name': 'Amazon'},
{'path': '/etc/system-release', 'name': 'Amazon'},
{'path': '/etc/alpine-release', 'name': 'Alpine'},
{'path': '/etc/arch-release', 'name': 'Archlinux', 'allowempty': True},
{'path': '/etc/os-release', 'name': 'Archlinux'},
{'path': '/etc/os-release', 'name': 'SUSE'},
{'path': '/etc/SuSE-release', 'name': 'SUSE'},
{'path': '/etc/gentoo-release', 'name': 'Gentoo'},
{'path': '/etc/os-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Mandriva'},
{'path': '/etc/sourcemage-release', 'name': 'SMGL'},
{'path': '/usr/lib/os-release', 'name': 'ClearLinux'},
{'path': '/etc/coreos/update.conf', 'name': 'Coreos'},
{'path': '/etc/flatcar/update.conf', 'name': 'Flatcar'},
{'path': '/etc/os-release', 'name': 'NA'},
)
SEARCH_STRING = {
'OracleLinux': 'Oracle Linux',
'RedHat': 'Red Hat',
'Altlinux': 'ALT',
'SMGL': 'Source Mage GNU/Linux',
}
# We can't include this in SEARCH_STRING because a name match on its keys
# causes a fallback to using the first whitespace separated item from the file content
# as the name. For os-release, that is in form 'NAME=Arch'
OS_RELEASE_ALIAS = {
'Archlinux': 'Arch Linux'
}
STRIP_QUOTES = r'\'\"\\'
def __init__(self, module):
self.module = module
def _get_file_content(self, path):
return get_file_content(path)
def _get_dist_file_content(self, path, allow_empty=False):
        # can't find that dist file or it is incorrectly empty
if not _file_exists(path, allow_empty=allow_empty):
return False, None
data = self._get_file_content(path)
return True, data
def _parse_dist_file(self, name, dist_file_content, path, collected_facts):
dist_file_dict = {}
dist_file_content = dist_file_content.strip(DistributionFiles.STRIP_QUOTES)
if name in self.SEARCH_STRING:
# look for the distribution string in the data and replace according to RELEASE_NAME_MAP
# only the distribution name is set, the version is assumed to be correct from distro.linux_distribution()
if self.SEARCH_STRING[name] in dist_file_content:
# this sets distribution=RedHat if 'Red Hat' shows up in data
dist_file_dict['distribution'] = name
dist_file_dict['distribution_file_search_string'] = self.SEARCH_STRING[name]
else:
# this sets distribution to what's in the data, e.g. CentOS, Scientific, ...
dist_file_dict['distribution'] = dist_file_content.split()[0]
return True, dist_file_dict
if name in self.OS_RELEASE_ALIAS:
if self.OS_RELEASE_ALIAS[name] in dist_file_content:
dist_file_dict['distribution'] = name
return True, dist_file_dict
return False, dist_file_dict
# call a dedicated function for parsing the file content
# TODO: replace with a map or a class
try:
            # FIXME: most of these don't actually look at the dist file contents, but random other stuff
distfunc_name = 'parse_distribution_file_' + name
distfunc = getattr(self, distfunc_name)
parsed, dist_file_dict = distfunc(name, dist_file_content, path, collected_facts)
return parsed, dist_file_dict
except AttributeError as exc:
self.module.debug('exc: %s' % exc)
# this should never happen, but if it does fail quietly and not with a traceback
return False, dist_file_dict
return True, dist_file_dict
# to debug multiple matching release files, one can use:
# self.facts['distribution_debug'].append({path + ' ' + name:
# (parsed,
# self.facts['distribution'],
# self.facts['distribution_version'],
# self.facts['distribution_release'],
# )})
def _guess_distribution(self):
# try to find out which linux distribution this is
dist = (get_distribution(), get_distribution_version(), get_distribution_codename())
distribution_guess = {
'distribution': dist[0] or 'NA',
'distribution_version': dist[1] or 'NA',
# distribution_release can be the empty string
'distribution_release': 'NA' if dist[2] is None else dist[2]
}
distribution_guess['distribution_major_version'] = distribution_guess['distribution_version'].split('.')[0] or 'NA'
return distribution_guess
def process_dist_files(self):
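        # Walk OSDIST_LIST in order: the first distro file that exists (and either
        # parses successfully or is explicitly allowed to be empty) decides
        # 'distribution', 'distribution_file_path' and 'distribution_file_variety',
        # overriding the initial platform guess from _guess_distribution().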
# Try to handle the exceptions now ...
# self.facts['distribution_debug'] = []
dist_file_facts = {}
dist_guess = self._guess_distribution()
dist_file_facts.update(dist_guess)
for ddict in self.OSDIST_LIST:
name = ddict['name']
path = ddict['path']
allow_empty = ddict.get('allowempty', False)
has_dist_file, dist_file_content = self._get_dist_file_content(path, allow_empty=allow_empty)
# but we allow_empty. For example, ArchLinux with an empty /etc/arch-release and a
# /etc/os-release with a different name
if has_dist_file and allow_empty:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
dist_file_facts['distribution_file_variety'] = name
break
if not has_dist_file:
# keep looking
continue
parsed_dist_file, parsed_dist_file_facts = self._parse_dist_file(name, dist_file_content, path, dist_file_facts)
# finally found the right os dist file and were able to parse it
if parsed_dist_file:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
# distribution and file_variety are the same here, but distribution
# will be changed/mapped to a more specific name.
# ie, dist=Fedora, file_variety=RedHat
dist_file_facts['distribution_file_variety'] = name
dist_file_facts['distribution_file_parsed'] = parsed_dist_file
dist_file_facts.update(parsed_dist_file_facts)
break
return dist_file_facts
# TODO: FIXME: split distro file parsing into its own module or class
def parse_distribution_file_Slackware(self, name, data, path, collected_facts):
slackware_facts = {}
if 'Slackware' not in data:
return False, slackware_facts # TODO: remove
slackware_facts['distribution'] = name
version = re.findall(r'\w+[.]\w+\+?', data)
if version:
slackware_facts['distribution_version'] = version[0]
return True, slackware_facts
def parse_distribution_file_Amazon(self, name, data, path, collected_facts):
amazon_facts = {}
if 'Amazon' not in data:
return False, amazon_facts
amazon_facts['distribution'] = 'Amazon'
if path == '/etc/os-release':
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
distribution_version = version.group(1)
amazon_facts['distribution_version'] = distribution_version
version_data = distribution_version.split(".")
if len(version_data) > 1:
major, minor = version_data
else:
major, minor = version_data[0], 'NA'
amazon_facts['distribution_major_version'] = major
amazon_facts['distribution_minor_version'] = minor
else:
version = [n for n in data.split() if n.isdigit()]
version = version[0] if version else 'NA'
amazon_facts['distribution_version'] = version
return True, amazon_facts
def parse_distribution_file_OpenWrt(self, name, data, path, collected_facts):
openwrt_facts = {}
if 'OpenWrt' not in data:
return False, openwrt_facts # TODO: remove
openwrt_facts['distribution'] = name
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
openwrt_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
openwrt_facts['distribution_release'] = release.groups()[0]
return True, openwrt_facts
def parse_distribution_file_Alpine(self, name, data, path, collected_facts):
alpine_facts = {}
alpine_facts['distribution'] = 'Alpine'
alpine_facts['distribution_version'] = data
return True, alpine_facts
def parse_distribution_file_SUSE(self, name, data, path, collected_facts):
suse_facts = {}
if 'suse' not in data.lower():
return False, suse_facts # TODO: remove if tested without this
if path == '/etc/os-release':
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution:
suse_facts['distribution'] = distribution.group(1).strip('"')
# example pattern are 13.04 13.0 13
distribution_version = re.search(r'^VERSION_ID="?([0-9]+\.?[0-9]*)"?', line)
if distribution_version:
suse_facts['distribution_version'] = distribution_version.group(1)
suse_facts['distribution_major_version'] = distribution_version.group(1).split('.')[0]
if 'open' in data.lower():
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release:
suse_facts['distribution_release'] = release.groups()[0]
elif 'enterprise' in data.lower() and 'VERSION_ID' in line:
                    # SLES doesn't have funny release names
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release.group(1):
release = release.group(1)
else:
release = "0" # no minor number, so it is the first release
suse_facts['distribution_release'] = release
elif path == '/etc/SuSE-release':
if 'open' in data.lower():
data = data.splitlines()
distdata = get_file_content(path).splitlines()[0]
suse_facts['distribution'] = distdata.split()[0]
for line in data:
release = re.search('CODENAME *= *([^\n]+)', line)
if release:
suse_facts['distribution_release'] = release.groups()[0].strip()
elif 'enterprise' in data.lower():
lines = data.splitlines()
distribution = lines[0].split()[0]
if "Server" in data:
suse_facts['distribution'] = "SLES"
elif "Desktop" in data:
suse_facts['distribution'] = "SLED"
for line in lines:
                    release = re.search('PATCHLEVEL = ([0-9]+)', line)  # SLES doesn't have funny release names
if release:
suse_facts['distribution_release'] = release.group(1)
suse_facts['distribution_version'] = collected_facts['distribution_version'] + '.' + release.group(1)
# See https://www.suse.com/support/kb/doc/?id=000019341 for SLES for SAP
if os.path.islink('/etc/products.d/baseproduct') and os.path.realpath('/etc/products.d/baseproduct').endswith('SLES_SAP.prod'):
suse_facts['distribution'] = 'SLES_SAP'
return True, suse_facts
def parse_distribution_file_Debian(self, name, data, path, collected_facts):
debian_facts = {}
if 'Debian' in data or 'Raspbian' in data:
debian_facts['distribution'] = 'Debian'
release = re.search(r"PRETTY_NAME=[^(]+ \(?([^)]+?)\)", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
# Last resort: try to find release from tzdata as either lsb is missing or this is very old debian
if collected_facts['distribution_release'] == 'NA' and 'Debian' in data:
dpkg_cmd = self.module.get_bin_path('dpkg')
if dpkg_cmd:
cmd = "%s --status tzdata|grep Provides|cut -f2 -d'-'" % dpkg_cmd
rc, out, err = self.module.run_command(cmd)
if rc == 0:
debian_facts['distribution_release'] = out.strip()
elif 'Ubuntu' in data:
debian_facts['distribution'] = 'Ubuntu'
# nothing else to do, Ubuntu gets correct info from python functions
elif 'SteamOS' in data:
debian_facts['distribution'] = 'SteamOS'
# nothing else to do, SteamOS gets correct info from python functions
elif path in ('/etc/lsb-release', '/etc/os-release') and ('Kali' in data or 'Parrot' in data):
if 'Kali' in data:
# Kali does not provide /etc/lsb-release anymore
debian_facts['distribution'] = 'Kali'
elif 'Parrot' in data:
debian_facts['distribution'] = 'Parrot'
release = re.search('DISTRIB_RELEASE=(.*)', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif 'Devuan' in data:
debian_facts['distribution'] = 'Devuan'
release = re.search(r"PRETTY_NAME=\"?[^(\"]+ \(?([^) \"]+)\)?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1)
elif 'Cumulus' in data:
debian_facts['distribution'] = 'Cumulus Linux'
version = re.search(r"VERSION_ID=(.*)", data)
if version:
major, _minor, _dummy_ver = version.group(1).split(".")
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = major
release = re.search(r'VERSION="(.*)"', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif "Mint" in data:
debian_facts['distribution'] = 'Linux Mint'
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
elif 'UOS' in data or 'Uos' in data or 'uos' in data:
debian_facts['distribution'] = 'Uos'
release = re.search(r"VERSION_CODENAME=\"?([^\"]+)\"?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
elif 'Deepin' in data or 'deepin' in data:
debian_facts['distribution'] = 'Deepin'
release = re.search(r"VERSION_CODENAME=\"?([^\"]+)\"?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
else:
return False, debian_facts
return True, debian_facts
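# Illustrative sketch (an assumed approach, not necessarily the fix from PR
# #74721): on Debian, /etc/os-release only carries the major version
# (VERSION_ID="10"), while the point release lives in /etc/debian_version
# (e.g. "10.7"). One way to prefer that file when it holds a dotted version:
#
#     point_release = get_file_content('/etc/debian_version', '')
#     if point_release and point_release.replace('.', '', 1).isdigit():
#         debian_facts['distribution_version'] = point_release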
def parse_distribution_file_Mandriva(self, name, data, path, collected_facts):
mandriva_facts = {}
if 'Mandriva' in data:
mandriva_facts['distribution'] = 'Mandriva'
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
mandriva_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
mandriva_facts['distribution_release'] = release.groups()[0]
mandriva_facts['distribution'] = name
else:
return False, mandriva_facts
return True, mandriva_facts
def parse_distribution_file_NA(self, name, data, path, collected_facts):
na_facts = {}
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution and name == 'NA':
na_facts['distribution'] = distribution.group(1).strip('"')
version = re.search("^VERSION=(.*)", line)
if version and collected_facts['distribution_version'] == 'NA':
na_facts['distribution_version'] = version.group(1).strip('"')
return True, na_facts
def parse_distribution_file_Coreos(self, name, data, path, collected_facts):
coreos_facts = {}
# FIXME: pass in ro copy of facts for this kind of thing
distro = get_distribution()
if distro.lower() == 'coreos':
if not data:
# include fix from #15230, #15228
# TODO: verify this is ok for above bugs
return False, coreos_facts
release = re.search("^GROUP=(.*)", data)
if release:
coreos_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, coreos_facts # TODO: remove if tested without this
return True, coreos_facts
def parse_distribution_file_Flatcar(self, name, data, path, collected_facts):
flatcar_facts = {}
distro = get_distribution()
if distro.lower() == 'flatcar':
if not data:
return False, flatcar_facts
release = re.search("^GROUP=(.*)", data)
if release:
flatcar_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, flatcar_facts
return True, flatcar_facts
def parse_distribution_file_ClearLinux(self, name, data, path, collected_facts):
clear_facts = {}
if "clearlinux" not in name.lower():
return False, clear_facts
pname = re.search('NAME="(.*)"', data)
if pname:
if 'Clear Linux' not in pname.groups()[0]:
return False, clear_facts
clear_facts['distribution'] = pname.groups()[0]
version = re.search('VERSION_ID=(.*)', data)
if version:
clear_facts['distribution_major_version'] = version.groups()[0]
clear_facts['distribution_version'] = version.groups()[0]
release = re.search('ID=(.*)', data)
if release:
clear_facts['distribution_release'] = release.groups()[0]
return True, clear_facts
def parse_distribution_file_CentOS(self, name, data, path, collected_facts):
centos_facts = {}
if 'CentOS Stream' in data:
centos_facts['distribution_release'] = 'Stream'
return True, centos_facts
if "TencentOS Server" in data:
centos_facts['distribution'] = 'TencentOS'
return True, centos_facts
return False, centos_facts
class Distribution(object):
"""
This class fills the distribution, distribution_version and distribution_release variables
To do so it checks the existence and content of typical files in /etc containing distribution information
This is unit tested. Please extend the tests to cover all distributions if you have them available.
"""
# keep keys in sync with Conditionals page of docs
OS_FAMILY_MAP = {'RedHat': ['RedHat', 'RHEL', 'Fedora', 'CentOS', 'Scientific', 'SLC',
'Ascendos', 'CloudLinux', 'PSBM', 'OracleLinux', 'OVS',
'OEL', 'Amazon', 'Virtuozzo', 'XenServer', 'Alibaba',
'EulerOS', 'openEuler', 'AlmaLinux', 'Rocky', 'TencentOS',
'EuroLinux'],
'Debian': ['Debian', 'Ubuntu', 'Raspbian', 'Neon', 'KDE neon',
'Linux Mint', 'SteamOS', 'Devuan', 'Kali', 'Cumulus Linux',
'Pop!_OS', 'Parrot', 'Pardus GNU/Linux', 'Uos', 'Deepin'],
'Suse': ['SuSE', 'SLES', 'SLED', 'openSUSE', 'openSUSE Tumbleweed',
'SLES_SAP', 'SUSE_LINUX', 'openSUSE Leap'],
'Archlinux': ['Archlinux', 'Antergos', 'Manjaro'],
'Mandrake': ['Mandrake', 'Mandriva'],
'Solaris': ['Solaris', 'Nexenta', 'OmniOS', 'OpenIndiana', 'SmartOS'],
'Slackware': ['Slackware'],
'Altlinux': ['Altlinux'],
'SGML': ['SGML'],
'Gentoo': ['Gentoo', 'Funtoo'],
'Alpine': ['Alpine'],
'AIX': ['AIX'],
'HP-UX': ['HPUX'],
'Darwin': ['MacOSX'],
'FreeBSD': ['FreeBSD', 'TrueOS'],
'ClearLinux': ['Clear Linux OS', 'Clear Linux Mix'],
'DragonFly': ['DragonflyBSD', 'DragonFlyBSD', 'Gentoo/DragonflyBSD', 'Gentoo/DragonFlyBSD'],
'NetBSD': ['NetBSD'], }
OS_FAMILY = {}
for family, names in OS_FAMILY_MAP.items():
for name in names:
OS_FAMILY[name] = family
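# For example, after this inversion OS_FAMILY['Ubuntu'] == 'Debian' and
# OS_FAMILY['SLES'] == 'Suse', while a name listed under its own family
# (e.g. 'Gentoo') maps to itself: OS_FAMILY['Gentoo'] == 'Gentoo'.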
def __init__(self, module):
self.module = module
def get_distribution_facts(self):
distribution_facts = {}
# The platform module provides information about the running
# system/distribution. Use this as a baseline and fix buggy systems
# afterwards
system = platform.system()
distribution_facts['distribution'] = system
distribution_facts['distribution_release'] = platform.release()
distribution_facts['distribution_version'] = platform.version()
systems_implemented = ('AIX', 'HP-UX', 'Darwin', 'FreeBSD', 'OpenBSD', 'SunOS', 'DragonFly', 'NetBSD')
if system in systems_implemented:
cleanedname = system.replace('-', '')
distfunc = getattr(self, 'get_distribution_' + cleanedname)
dist_func_facts = distfunc()
distribution_facts.update(dist_func_facts)
elif system == 'Linux':
distribution_files = DistributionFiles(module=self.module)
# linux_distribution_facts = LinuxDistribution(module).get_distribution_facts()
dist_file_facts = distribution_files.process_dist_files()
distribution_facts.update(dist_file_facts)
distro = distribution_facts['distribution']
# look for an os family alias for the 'distribution'; if there isn't one, use 'distribution'
distribution_facts['os_family'] = self.OS_FAMILY.get(distro, None) or distro
return distribution_facts
def get_distribution_AIX(self):
aix_facts = {}
rc, out, err = self.module.run_command("/usr/bin/oslevel")
data = out.split('.')
aix_facts['distribution_major_version'] = data[0]
if len(data) > 1:
aix_facts['distribution_version'] = '%s.%s' % (data[0], data[1])
aix_facts['distribution_release'] = data[1]
else:
aix_facts['distribution_version'] = data[0]
return aix_facts
def get_distribution_HPUX(self):
hpux_facts = {}
rc, out, err = self.module.run_command(r"/usr/sbin/swlist |egrep 'HPUX.*OE.*[AB].[0-9]+\.[0-9]+'", use_unsafe_shell=True)
data = re.search(r'HPUX.*OE.*([AB].[0-9]+\.[0-9]+)\.([0-9]+).*', out)
if data:
hpux_facts['distribution_version'] = data.groups()[0]
hpux_facts['distribution_release'] = data.groups()[1]
return hpux_facts
def get_distribution_Darwin(self):
darwin_facts = {}
darwin_facts['distribution'] = 'MacOSX'
rc, out, err = self.module.run_command("/usr/bin/sw_vers -productVersion")
data = out.split()[-1]
if data:
darwin_facts['distribution_major_version'] = data.split('.')[0]
darwin_facts['distribution_version'] = data
return darwin_facts
def get_distribution_FreeBSD(self):
freebsd_facts = {}
freebsd_facts['distribution_release'] = platform.release()
data = re.search(r'(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT|RC|PRERELEASE).*', freebsd_facts['distribution_release'])
if 'trueos' in platform.version():
freebsd_facts['distribution'] = 'TrueOS'
if data:
freebsd_facts['distribution_major_version'] = data.group(1)
freebsd_facts['distribution_version'] = '%s.%s' % (data.group(1), data.group(2))
return freebsd_facts
def get_distribution_OpenBSD(self):
openbsd_facts = {}
openbsd_facts['distribution_version'] = platform.release()
rc, out, err = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'OpenBSD\s[0-9]+.[0-9]+-(\S+)\s.*', out)
if match:
openbsd_facts['distribution_release'] = match.groups()[0]
else:
openbsd_facts['distribution_release'] = 'release'
return openbsd_facts
def get_distribution_DragonFly(self):
dragonfly_facts = {
'distribution_release': platform.release()
}
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.search(r'v(\d+)\.(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT).*', out)
if match:
dragonfly_facts['distribution_major_version'] = match.group(1)
dragonfly_facts['distribution_version'] = '%s.%s.%s' % match.groups()[:3]
return dragonfly_facts
def get_distribution_NetBSD(self):
netbsd_facts = {}
platform_release = platform.release()
netbsd_facts['distribution_release'] = platform_release
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'NetBSD\s(\d+)\.(\d+)\s\((GENERIC)\).*', out)
if match:
netbsd_facts['distribution_major_version'] = match.group(1)
netbsd_facts['distribution_version'] = '%s.%s' % match.groups()[:2]
else:
netbsd_facts['distribution_major_version'] = platform_release.split('.')[0]
netbsd_facts['distribution_version'] = platform_release
return netbsd_facts
def get_distribution_SMGL(self):
smgl_facts = {}
smgl_facts['distribution'] = 'Source Mage GNU/Linux'
return smgl_facts
def get_distribution_SunOS(self):
sunos_facts = {}
data = get_file_content('/etc/release').splitlines()[0]
if 'Solaris' in data:
# for solaris 10 uname_r will contain 5.10, for solaris 11 it will have 5.11
uname_r = get_uname(self.module, flags=['-r'])
ora_prefix = ''
if 'Oracle Solaris' in data:
data = data.replace('Oracle ', '')
ora_prefix = 'Oracle '
sunos_facts['distribution'] = data.split()[0]
sunos_facts['distribution_version'] = data.split()[1]
sunos_facts['distribution_release'] = ora_prefix + data
sunos_facts['distribution_major_version'] = uname_r.split('.')[1].rstrip()
return sunos_facts
uname_v = get_uname(self.module, flags=['-v'])
distribution_version = None
if 'SmartOS' in data:
sunos_facts['distribution'] = 'SmartOS'
if _file_exists('/etc/product'):
product_data = dict([l.split(': ', 1) for l in get_file_content('/etc/product').splitlines() if ': ' in l])
if 'Image' in product_data:
distribution_version = product_data.get('Image').split()[-1]
elif 'OpenIndiana' in data:
sunos_facts['distribution'] = 'OpenIndiana'
elif 'OmniOS' in data:
sunos_facts['distribution'] = 'OmniOS'
distribution_version = data.split()[-1]
elif uname_v is not None and 'NexentaOS_' in uname_v:
sunos_facts['distribution'] = 'Nexenta'
distribution_version = data.split()[-1].lstrip('v')
if sunos_facts.get('distribution', '') in ('SmartOS', 'OpenIndiana', 'OmniOS', 'Nexenta'):
sunos_facts['distribution_release'] = data.strip()
if distribution_version is not None:
sunos_facts['distribution_version'] = distribution_version
elif uname_v is not None:
sunos_facts['distribution_version'] = uname_v.splitlines()[0].strip()
return sunos_facts
return sunos_facts
class DistributionFactCollector(BaseFactCollector):
name = 'distribution'
_fact_ids = set(['distribution_version',
'distribution_release',
'distribution_major_version',
'os_family']) # type: t.Set[str]
def collect(self, module=None, collected_facts=None):
collected_facts = collected_facts or {}
facts_dict = {}
if not module:
return facts_dict
distribution = Distribution(module=module)
distro_facts = distribution.get_distribution_facts()
return distro_facts
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,481 |
Failed to get Debian minor version
|
### Summary
When I run the setup module to get Debian 10.7 distribution information, it reports ansible_distribution_version=10. The real version is in /etc/debian_version, which is 10.7.
```
$ ansible 10.117.16.248 -m setup -a "filter=ansible_distribution*"
10.117.16.248 | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "10",
"ansible_distribution_release": "buster",
"ansible_distribution_version": "10"
},
"changed": false
}
$ ansible 10.117.16.248 -m shell -a "cat /etc/os-release"
10.117.16.248 | CHANGED | rc=0 >>
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ ansible 10.117.16.248 -m shell -a "cat /etc/debian_version"
10.117.16.248 | CHANGED | rc=0 >>
10.7
```
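For reference, a minimal sketch of the behavior I would expect (an assumed approach, not necessarily what PR #74721 implements), since the point release is readily available from `/etc/debian_version`:
```python
# Hypothetical: combine the os-release major version with the point release file
with open('/etc/debian_version') as f:
    point_release = f.read().strip()  # e.g. "10.7"

facts = {
    'ansible_distribution_version': point_release,                       # "10.7"
    'ansible_distribution_major_version': point_release.split('.')[0],  # "10"
}
```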
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible 2.10.4
config file = /home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg
configured module search path = ['/home/qiz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_SSH_RETRIES(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = 5
DEFAULT_CALLBACK_PLUGIN_PATH(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = ['/home/qiz/workspace/github/ansible-vsphe
DEFAULT_CALLBACK_WHITELIST(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = ['timer']
DEFAULT_HOST_LIST(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = ['/home/qiz/workspace/github/ansible-vsphere-gos-vali
DISPLAY_SKIPPED_HOSTS(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = False
```
### OS / Environment
Debian 10.7
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
I expected to get ansible_distribution_version=10.7, as shown below, not 10.
$ ansible 10.117.16.248 -m setup -a "filter=ansible_distribution*"
10.117.16.248 | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "10",
"ansible_distribution_release": "buster",
"ansible_distribution_version": "10.7"
},
"changed": false
}
### Actual Results
```console
$ ansible 10.117.16.248 -m setup -a "filter=ansible_distribution*"
10.117.16.248 | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "10",
"ansible_distribution_release": "buster",
"ansible_distribution_version": "10"
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74481
|
https://github.com/ansible/ansible/pull/74721
|
6230244537b57fddf1bf822c74ffd0061eb240cc
|
524d30b8b0cc193900e04b99f629810cd2bb7dd5
| 2021-04-28T15:22:55Z |
python
| 2022-05-11T21:21:47Z |
test/units/module_utils/facts/system/distribution/fixtures/debian_10.json
|
{
"name": "Debian 10",
"distro": {
"codename": "buster",
"id": "debian",
"name": "Debian GNU/Linux",
"version": "10",
"version_best": "10",
"lsb_release_info": {
"distributor_id": "Debian",
"description": "Debian GNU/Linux 10 (buster)",
"release": "10",
"codename": "buster"
},
"os_release_info": {
"pretty_name": "Debian GNU/Linux 10 (buster)",
"name": "Debian GNU/Linux",
"version_id": "10",
"version": "10 (buster)",
"version_codename": "buster",
"id": "debian",
"home_url": "https://www.debian.org/",
"support_url": "https://www.debian.org/support",
"bug_report_url": "https://bugs.debian.org/",
"codename": "buster"
}
},
"input": {
"/etc/os-release": "PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"\nNAME=\"Debian GNU/Linux\"\nVERSION_ID=\"10\"\nVERSION=\"10 (buster)\"\nVERSION_CODENAME=buster\nID=debian\nHOME_URL=\"https://www.debian.org/\"\nSUPPORT_URL=\"https://www.debian.org/support\"\nBUG_REPORT_URL=\"https://bugs.debian.org/\"\n",
"/usr/lib/os-release": "PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"\nNAME=\"Debian GNU/Linux\"\nVERSION_ID=\"10\"\nVERSION=\"10 (buster)\"\nVERSION_CODENAME=buster\nID=debian\nHOME_URL=\"https://www.debian.org/\"\nSUPPORT_URL=\"https://www.debian.org/support\"\nBUG_REPORT_URL=\"https://bugs.debian.org/\"\n"
},
"platform.dist": ["debian", "10", "buster"],
"result": {
"distribution": "Debian",
"distribution_version": "10",
"distribution_release": "buster",
"distribution_major_version": "10",
"os_family": "Debian"
}
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,481 |
Failed to get Debian minor version
|
### Summary
When I run the setup module to get Debian 10.7 distribution information, it reports ansible_distribution_version=10. The real version is in /etc/debian_version, which is 10.7.
```
$ ansible 10.117.16.248 -m setup -a "filter=ansible_distribution*"
10.117.16.248 | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "10",
"ansible_distribution_release": "buster",
"ansible_distribution_version": "10"
},
"changed": false
}
$ ansible 10.117.16.248 -m shell -a "cat /etc/os-release"
10.117.16.248 | CHANGED | rc=0 >>
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ ansible 10.117.16.248 -m shell -a "cat /etc/debian_version"
10.117.16.248 | CHANGED | rc=0 >>
10.7
```
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible 2.10.4
config file = /home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg
configured module search path = ['/home/qiz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_SSH_RETRIES(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = 5
DEFAULT_CALLBACK_PLUGIN_PATH(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = ['/home/qiz/workspace/github/ansible-vsphe
DEFAULT_CALLBACK_WHITELIST(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = ['timer']
DEFAULT_HOST_LIST(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = ['/home/qiz/workspace/github/ansible-vsphere-gos-vali
DISPLAY_SKIPPED_HOSTS(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/qiz/workspace/github/ansible-vsphere-gos-validation/ansible.cfg) = False
```
### OS / Environment
Debian 10.7
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
I expected to get ansible_distribution_version=10.7, as shown below, not 10.
$ ansible 10.117.16.248 -m setup -a "filter=ansible_distribution*"
10.117.16.248 | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "10",
"ansible_distribution_release": "buster",
"ansible_distribution_version": "10.7"
},
"changed": false
}
### Actual Results
```console
$ ansible 10.117.16.248 -m setup -a "filter=ansible_distribution*"
10.117.16.248 | SUCCESS => {
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "10",
"ansible_distribution_release": "buster",
"ansible_distribution_version": "10"
},
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74481
|
https://github.com/ansible/ansible/pull/74721
|
6230244537b57fddf1bf822c74ffd0061eb240cc
|
524d30b8b0cc193900e04b99f629810cd2bb7dd5
| 2021-04-28T15:22:55Z |
python
| 2022-05-11T21:21:47Z |
test/units/module_utils/facts/system/distribution/test_distribution_version.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import glob
import json
import os
import pytest
from itertools import product
from ansible.module_utils.six.moves import builtins
# the module we are actually testing (sort of)
from ansible.module_utils.facts.system.distribution import DistributionFactCollector
# to generate the testcase data, you can use the script gen_distribution_version_testcase.py in hacking/tests
TESTSETS = []
for datafile in glob.glob(os.path.join(os.path.dirname(__file__), 'fixtures/*.json')):
with open(os.path.join(os.path.dirname(__file__), '%s' % datafile)) as f:
TESTSETS.append(json.loads(f.read()))
@pytest.mark.parametrize("stdin, testcase", product([{}], TESTSETS), ids=lambda x: x.get('name'), indirect=['stdin'])
def test_distribution_version(am, mocker, testcase):
"""tests the distribution parsing code of the Facts class
testsets have
* a name (for output/debugging only)
* input files that are faked
* those should be complete and also include "irrelevant" files that might be mistaken as coming from other distributions
* all files that are not listed here are assumed to not exist at all
* the output of ansible.module_utils.distro.linux_distribution() [called platform.dist() for historical reasons]
* results for the ansible variables distribution* and os_family
"""
# prepare some mock functions to get the testdata in
def mock_get_file_content(fname, default=None, strip=True):
"""give fake content if it exists, otherwise pretend the file is empty"""
data = default
if fname in testcase['input']:
# for debugging
print('faked %s for %s' % (fname, testcase['name']))
data = testcase['input'][fname].strip()
if strip and data is not None:
data = data.strip()
return data
def mock_get_uname(am, flags):
if '-v' in flags:
return testcase.get('uname_v', None)
elif '-r' in flags:
return testcase.get('uname_r', None)
else:
return None
def mock_file_exists(fname, allow_empty=False):
if fname not in testcase['input']:
return False
if allow_empty:
return True
return bool(len(testcase['input'][fname]))
def mock_platform_system():
return testcase.get('platform.system', 'Linux')
def mock_platform_release():
return testcase.get('platform.release', '')
def mock_platform_version():
return testcase.get('platform.version', '')
def mock_distro_name():
return testcase['distro']['name']
def mock_distro_id():
return testcase['distro']['id']
def mock_distro_version(best=False):
if best:
return testcase['distro']['version_best']
return testcase['distro']['version']
def mock_distro_codename():
return testcase['distro']['codename']
def mock_distro_os_release_info():
return testcase['distro']['os_release_info']
def mock_distro_lsb_release_info():
return testcase['distro']['lsb_release_info']
def mock_open(filename, mode='r'):
if filename in testcase['input']:
file_object = mocker.mock_open(read_data=testcase['input'][filename]).return_value
file_object.__iter__.return_value = testcase['input'][filename].splitlines(True)
else:
file_object = real_open(filename, mode)
return file_object
def mock_os_path_is_file(filename):
if filename in testcase['input']:
return True
return False
def mock_run_command_output(v, command):
ret = (0, '', '')
if 'command_output' in testcase:
ret = (0, testcase['command_output'].get(command, ''), '')
return ret
mocker.patch('ansible.module_utils.facts.system.distribution.get_file_content', mock_get_file_content)
mocker.patch('ansible.module_utils.facts.system.distribution.get_uname', mock_get_uname)
mocker.patch('ansible.module_utils.facts.system.distribution._file_exists', mock_file_exists)
mocker.patch('ansible.module_utils.distro.name', mock_distro_name)
mocker.patch('ansible.module_utils.distro.id', mock_distro_id)
mocker.patch('ansible.module_utils.distro.version', mock_distro_version)
mocker.patch('ansible.module_utils.distro.codename', mock_distro_codename)
mocker.patch(
'ansible.module_utils.common.sys_info.distro.os_release_info',
mock_distro_os_release_info)
mocker.patch(
'ansible.module_utils.common.sys_info.distro.lsb_release_info',
mock_distro_lsb_release_info)
mocker.patch('os.path.isfile', mock_os_path_is_file)
mocker.patch('platform.system', mock_platform_system)
mocker.patch('platform.release', mock_platform_release)
mocker.patch('platform.version', mock_platform_version)
mocker.patch('ansible.module_utils.basic.AnsibleModule.run_command', mock_run_command_output)
real_open = builtins.open
mocker.patch.object(builtins, 'open', new=mock_open)
# run Facts()
distro_collector = DistributionFactCollector()
generated_facts = distro_collector.collect(am)
# compare with the expected output
# testcase['result'] has a list of variables and values it expects Facts() to set
for key, val in testcase['result'].items():
assert key in generated_facts
msg = 'Comparing value of %s on %s, should: %s, is: %s' %\
(key, testcase['name'], val, generated_facts[key])
assert generated_facts[key] == val, msg
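# Illustrative (assumed) invocation from an ansible checkout; the test ids come
# from each fixture's "name" field:
#   pytest test/units/module_utils/facts/system/distribution/test_distribution_version.py -k "Debian"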
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,776 |
Quickstart video does not exist
|
### Summary
At the top of the page "Ansible Quickstart Guide"
https://docs.ansible.com/ansible/latest/user_guide/quickstart.html#quickstart-guide
There is a link to "The [quickstart video](https://www.ansible.com/resources/videos/quick-start-video) is about 13 minutes long and gives you a high level introduction to Ansible"
My expectation is that there will be a video on that page.
My reality is that there is no video on that page.
### Issue Type
Documentation Report
### Component Name
NA
### Ansible Version
```console
$ ansible --version
NA
```
### Configuration
```console
$ ansible-config dump --only-changed
NA
```
### OS / Environment
Chrome
### Additional Information
NA
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77776
|
https://github.com/ansible/ansible/pull/77779
|
654fff05e6f9ae51084dbc866ca30020aa960fb2
|
f75bb09a8b60870d640835f4309d318539fdef96
| 2022-05-11T15:12:31Z |
python
| 2022-05-12T14:34:45Z |
docs/docsite/rst/user_guide/quickstart.rst
|
.. _quickstart_guide:
Ansible Quickstart Guide
========================
We've recorded a short video that introduces Ansible.
The `quickstart video <https://www.ansible.com/resources/videos/quick-start-video>`_ is about 13 minutes long and gives you a high-level
introduction to Ansible -- what it does and how to use it. We'll also tell you about other products in the Ansible ecosystem.
Enjoy, and be sure to visit the rest of the documentation to learn more.
.. seealso::
`A system administrators guide to getting started with Ansible <https://www.redhat.com/en/blog/system-administrators-guide-getting-started-ansible-fast>`_
A step-by-step introduction to Ansible
`Ansible Automation for SysAdmins <https://opensource.com/downloads/ansible-quickstart>`_
A downloadable guide for getting started with Ansible
:ref:`network_getting_started`
A guide for network engineers using Ansible for the first time
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,675 |
ansible-test sanity - Include which documentation block is being parsed when raising "Unknown DOCUMENTATION error"
|
### Summary
It would be helpful if the generic `Unknown DOCUMENTATION error` indicated which of the documentation blocks was being parsed at the time the exception was hit.
### Issue Type
Feature Idea
### Component Name
ansible-test
### Additional Information
Locally I run `ansible-test sanity --docker default -vv` using the devel branch to try and keep ahead of new sanity tests (I know this is expected to break from time to time, and I'm good with that).
I hit the following error:
```
ERROR: plugins/modules/ec2_snapshot_info.py:0:0: documentation-error: Unknown DOCUMENTATION error, see TRACE: while scanning a simple key
in "<unicode string>", line 72, column 13
could not find expected ':'
in "<unicode string>", line 73, column 13
```
Because I hadn't changed this module, it took me some time to figure out where the failure was occurring; being able to narrow down which of the documentation blocks had some bad YAML would have saved me time. This is especially the case for these documentation tests because the line numbers aren't easy to trace.
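For illustration, the kind of message that would have helped (hypothetical wording and variable names, not the actual patch from #77685):
```python
# Hypothetical: name the docs block (DOCUMENTATION/EXAMPLES/RETURN) in the error
raise AnsibleError('unknown %s error in %s, see TRACE: %s' % (block_name, path, trace))
```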
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77675
|
https://github.com/ansible/ansible/pull/77685
|
f5642cea28299857fd11aa77307f7ee04ef1edb3
|
0d6aa2e87ebd776fa6175616e8690b2a42fc119e
| 2022-04-28T09:07:44Z |
python
| 2022-05-13T16:18:31Z |
changelogs/fragments/doc_errors.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,675 |
ansible-test sanity - Include which documentation block is being parsed when raising "Unknown DOCUMENTATION error"
|
### Summary
It would be helpful if the generic `Unknown DOCUMENTATION error` indicated which of the documentation blocks was being parsed at the time the exception was hit.
### Issue Type
Feature Idea
### Component Name
ansible-test
### Additional Information
Locally I run `ansible-test sanity --docker default -vv` using the devel branch to try and keep ahead of new sanity tests (I know this is expected to break from time to time, and I'm good with that).
I hit the following error:
```
ERROR: plugins/modules/ec2_snapshot_info.py:0:0: documentation-error: Unknown DOCUMENTATION error, see TRACE: while scanning a simple key
in "<unicode string>", line 72, column 13
could not find expected ':'
in "<unicode string>", line 73, column 13
```
Because I hadn't changed this module, it took me some time to figure out where the failure was occurring; being able to narrow down which of the documentation blocks had some bad YAML would have saved me time. This is especially the case for these documentation tests because the line numbers aren't easy to trace.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77675
|
https://github.com/ansible/ansible/pull/77685
|
f5642cea28299857fd11aa77307f7ee04ef1edb3
|
0d6aa2e87ebd776fa6175616e8690b2a42fc119e
| 2022-04-28T09:07:44Z |
python
| 2022-05-13T16:18:31Z |
lib/ansible/parsing/plugin_docs.py
|
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import os
import pyclbr
import tokenize
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_text, to_native
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.utils.display import Display
display = Display()
string_to_vars = {
'DOCUMENTATION': 'doc',
'EXAMPLES': 'plainexamples',
'RETURN': 'returndocs',
'ANSIBLE_METADATA': 'metadata', # NOTE: now unused, but kept for backwards compat
}
def _init_doc_dict():
''' initialize a return dict for docs with the expected structure '''
return {k: None for k in string_to_vars.values()}
def read_docstring_from_yaml_file(filename, verbose=True, ignore_errors=True):
''' Read docs for a plugin from a 'sidecar' YAML file '''
global string_to_vars
data = _init_doc_dict()
file_data = {}
try:
with open(filename, 'rb') as yamlfile:
file_data = AnsibleLoader(yamlfile.read(), file_name=filename).get_single_data()
except Exception:
if verbose:
display.error("unable to parse %s" % filename)
if not ignore_errors:
raise
for key in string_to_vars:
data[string_to_vars[key]] = file_data.get(key, None)
return data
def _find_jinja_function(filename, verbose=True, ignore_errors=True):
# for finding filters/tests
module_name = os.path.splitext(os.path.basename(filename))[0]
paths = [os.path.dirname(filename)]
mdata = pyclbr.readmodule_ex(module_name, paths)
def read_docstring_from_python_module(filename, verbose=True, ignore_errors=True):
"""
Use tokenization to search for assignment of the documentation variables in the given file.
Parse from YAML and return the resulting python structure or None together with examples as plain text.
"""
found = 0
data = _init_doc_dict()
next_string = None
with tokenize.open(filename) as f:
tokens = tokenize.generate_tokens(f.readline)
for token in tokens:
if token.type == tokenize.NAME:
if token.string in string_to_vars:
next_string = string_to_vars[token.string]
if next_string is not None and token.type == tokenize.STRING:
found += 1
value = token.string
if value.startswith(('r', 'b')):
value = value.lstrip('rb')
if value.startswith(("'", '"')):
value = value.strip("'\"")
if next_string == 'plainexamples':
# keep as string
data[next_string] = to_text(value)
else:
try:
data[next_string] = AnsibleLoader(value, file_name=filename).get_single_data()
except Exception as e:
if ignore_errors:
if verbose:
display.error("unable to parse %s" % filename)
else:
raise
next_string = None
# if nothing else worked, fall back to old method
if not found:
data = read_docstring_from_python_file(filename, verbose, ignore_errors)
return data
def read_docstring_from_python_file(filename, verbose=True, ignore_errors=True):
"""
Use ast to search for assignment of the DOCUMENTATION and EXAMPLES variables in the given file.
Parse DOCUMENTATION from YAML and return the YAML doc or None together with EXAMPLES, as plain text.
"""
data = _init_doc_dict()
global string_to_vars
try:
with open(filename, 'rb') as b_module_data:
M = ast.parse(b_module_data.read())
for child in M.body:
if isinstance(child, ast.Assign):
for t in child.targets:
try:
theid = t.id
except AttributeError:
# skip; errors can happen when trying to use the normal code
display.warning("Failed to assign id for %s on %s, skipping" % (t, filename))
continue
if theid in string_to_vars:
varkey = string_to_vars[theid]
if isinstance(child.value, ast.Dict):
data[varkey] = ast.literal_eval(child.value)
else:
if theid == 'EXAMPLES':
# examples 'can' be yaml, but even if so, we don't want to parse them as such here
# as it can create undesired 'objects' that don't display well as docs.
data[varkey] = to_text(child.value.s)
else:
# string should be yaml if already not a dict
data[varkey] = AnsibleLoader(child.value.s, file_name=filename).get_single_data()
display.debug('assigned: %s' % varkey)
except Exception:
if verbose:
display.error("unable to parse %s" % filename)
if not ignore_errors:
raise
return data
def read_docstring(filename, verbose=True, ignore_errors=True):
''' returns a documentation dictionary from Ansible plugin docstrings '''
# TODO: ensure adjacency to code (including ps1 for py files)
if filename.endswith(C.YAML_DOC_EXTENSIONS):
docstring = read_docstring_from_yaml_file(filename, verbose=verbose, ignore_errors=ignore_errors)
elif filename.endswith(C.PYTHON_DOC_EXTENSIONS):
docstring = read_docstring_from_python_module(filename, verbose=verbose, ignore_errors=ignore_errors)
elif not ignore_errors:
raise AnsibleError("Unknown documentation format: %s" % to_native(filename))
if not docstring and not ignore_errors:
raise AnsibleError("Unable to parse documentation for: %s" % to_native(filename))
# because seealso is specially processed from 'doc' later on
# TODO: stop any other 'overloaded' implementation in main doc
docstring['seealso'] = None
return docstring
def read_docstub(filename):
"""
Quickly find short_description using string methods instead of node parsing.
This does not return a full set of documentation strings and is intended for
operations like ansible-doc -l.
"""
in_documentation = False
capturing = False
indent_detection = ''
doc_stub = []
with open(filename, 'r') as t_module_data:
for line in t_module_data:
if in_documentation:
# start capturing the stub until indentation returns
if capturing and line.startswith(indent_detection):
doc_stub.append(line)
elif capturing and not line.startswith(indent_detection):
break
elif line.lstrip().startswith('short_description:'):
capturing = True
# Detect that the short_description continues on the next line if it's indented more
# than short_description itself.
indent_detection = ' ' * (len(line) - len(line.lstrip()) + 1)
doc_stub.append(line)
elif line.startswith('DOCUMENTATION') and ('=' in line or ':' in line):
in_documentation = True
short_description = r''.join(doc_stub).strip().rstrip('.')
data = AnsibleLoader(short_description, file_name=filename).get_single_data()
return data
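# Illustrative (assumed) usage of the public helper above:
#
#     docs = read_docstring('lib/ansible/modules/ping.py')
#     docs['doc']            # parsed DOCUMENTATION (dict) or None
#     docs['plainexamples']  # EXAMPLES, kept as plain text
#     docs['returndocs']     # parsed RETURN or None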
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,788 |
lookup plugin broken when wantlist=False
|
### Summary
Copied from amazon.aws https://github.com/ansible-collections/amazon.aws/issues/633
https://github.com/ansible/ansible/pull/75587 appears to have broken amazon.aws.lookup_aws_account_attribute when running wantlist=False
### Issue Type
Bug Report
### Component Name
lib/ansible/template/__init__.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (devel f75bb09a8b) last updated 2022/05/12 17:31:50 (GMT -400)
config file = None
configured module search path = ['/home/josephtorcasso/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/josephtorcasso/633/ansible/lib/ansible
ansible collection location = /home/josephtorcasso/.ansible/collections:/usr/share/ansible/collections
executable location = /home/josephtorcasso/633/venv/bin/ansible
python version = 3.10.4 (main, Mar 25 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)] (/home/josephtorcasso/633/venv/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Fedora 35
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Running integration tests for amazon.aws lookup_aws_account_attribute
### Expected Results
Expected tests to pass using wantlist=False
### Actual Results
```console
TASK [lookup_aws_account_attribute : Fetch all account attributes (wantlist=False)] *******************************************************************************************************************************
task path: /root/ansible_collections/amazon/aws/tests/output/.tmp/integration/lookup_aws_account_attribute-bjtiq49h-ÅÑŚÌβŁÈ/tests/integration/targets/lookup_aws_account_attribute/tasks/main.yaml:50
The full traceback is:
Traceback (most recent call last):
File "/root/ansible/lib/ansible/executor/task_executor.py", line 503, in _execute
self._task.post_validate(templar=templar)
File "/root/ansible/lib/ansible/playbook/task.py", line 283, in post_validate
super(Task, self).post_validate(templar)
File "/root/ansible/lib/ansible/playbook/base.py", line 650, in post_validate
value = templar.template(getattr(self, name))
File "/root/ansible/lib/ansible/template/__init__.py", line 874, in template
d[k] = self.template(
File "/root/ansible/lib/ansible/template/__init__.py", line 842, in template
result = self.do_template(
File "/root/ansible/lib/ansible/template/__init__.py", line 1101, in do_template
res = ansible_concat(rf, convert_data, myenv.variable_start_string)
File "/root/ansible/lib/ansible/template/native_helpers.py", line 60, in ansible_concat
head = list(islice(nodes, 2))
File "<template>", line 13, in root
File "/usr/lib/python3.10/dist-packages/jinja2/runtime.py", line 349, in call
return __obj(*args, **kwargs)
File "/root/ansible/lib/ansible/template/__init__.py", line 1013, in _lookup
if isinstance(ran[0], NativeJinjaText):
KeyError: 0
fatal: [testhost]: FAILED! => {
"changed": false
}
```
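For context, one possible guard at the failing line (illustrative only; names are taken from the traceback, and the actual fix landed via #77789). Judging by the `KeyError`, with `wantlist=False` the value bound to `ran` here can be a dict rather than a list, so `ran[0]` fails:
```python
# Hypothetical: only peek at ran[0] when ran is an indexable sequence
from collections.abc import Sequence

if ran and isinstance(ran, Sequence) and isinstance(ran[0], NativeJinjaText):
    ...  # NativeJinjaText comes from ansible.utils.native_jinja
```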
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77788
|
https://github.com/ansible/ansible/pull/77789
|
5e50284693cb5531eb4265a0ab94b35be89457f6
|
c9ce7d08a256646abdaccc80b480b8b9c2df9f1b
| 2022-05-13T01:16:52Z |
python
| 2022-05-17T16:24:53Z |
changelogs/fragments/77789-catch-keyerror-lookup-dict.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,788 |
lookup plugin broken when wantlist=False
|
### Summary
Copied from amazon.aws https://github.com/ansible-collections/amazon.aws/issues/633
https://github.com/ansible/ansible/pull/75587 appears to have broken amazon.aws.lookup_aws_account_attribute when running wantlist=False
### Issue Type
Bug Report
### Component Name
lib/ansible/template/__init__.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (devel f75bb09a8b) last updated 2022/05/12 17:31:50 (GMT -400)
config file = None
configured module search path = ['/home/josephtorcasso/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/josephtorcasso/633/ansible/lib/ansible
ansible collection location = /home/josephtorcasso/.ansible/collections:/usr/share/ansible/collections
executable location = /home/josephtorcasso/633/venv/bin/ansible
python version = 3.10.4 (main, Mar 25 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)] (/home/josephtorcasso/633/venv/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Fedora 35
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Running integration tests for amazon.aws lookup_aws_account_attribute
### Expected Results
Expected tests to pass using wantlist=False
### Actual Results
```console
TASK [lookup_aws_account_attribute : Fetch all account attributes (wantlist=False)] *******************************************************************************************************************************
task path: /root/ansible_collections/amazon/aws/tests/output/.tmp/integration/lookup_aws_account_attribute-bjtiq49h-ÅÑŚÌβŁÈ/tests/integration/targets/lookup_aws_account_attribute/tasks/main.yaml:50
The full traceback is:
Traceback (most recent call last):
File "/root/ansible/lib/ansible/executor/task_executor.py", line 503, in _execute
self._task.post_validate(templar=templar)
File "/root/ansible/lib/ansible/playbook/task.py", line 283, in post_validate
super(Task, self).post_validate(templar)
File "/root/ansible/lib/ansible/playbook/base.py", line 650, in post_validate
value = templar.template(getattr(self, name))
File "/root/ansible/lib/ansible/template/__init__.py", line 874, in template
d[k] = self.template(
File "/root/ansible/lib/ansible/template/__init__.py", line 842, in template
result = self.do_template(
File "/root/ansible/lib/ansible/template/__init__.py", line 1101, in do_template
res = ansible_concat(rf, convert_data, myenv.variable_start_string)
File "/root/ansible/lib/ansible/template/native_helpers.py", line 60, in ansible_concat
head = list(islice(nodes, 2))
File "<template>", line 13, in root
File "/usr/lib/python3.10/dist-packages/jinja2/runtime.py", line 349, in call
return __obj(*args, **kwargs)
File "/root/ansible/lib/ansible/template/__init__.py", line 1013, in _lookup
if isinstance(ran[0], NativeJinjaText):
KeyError: 0
fatal: [testhost]: FAILED! => {
"changed": false
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77788
|
https://github.com/ansible/ansible/pull/77789
|
5e50284693cb5531eb4265a0ab94b35be89457f6
|
c9ce7d08a256646abdaccc80b480b8b9c2df9f1b
| 2022-05-13T01:16:52Z |
python
| 2022-05-17T16:24:53Z |
lib/ansible/template/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import datetime
import os
import pkgutil
import pwd
import re
import time
from collections.abc import Iterator, Sequence, Mapping, MappingView, MutableMapping
from contextlib import contextmanager
from hashlib import sha1
from numbers import Number
from traceback import format_exc
from jinja2.exceptions import TemplateSyntaxError, UndefinedError
from jinja2.loaders import FileSystemLoader
from jinja2.nativetypes import NativeEnvironment
from jinja2.runtime import Context, StrictUndefined
from ansible import constants as C
from ansible.errors import (
AnsibleAssertionError,
AnsibleError,
AnsibleFilterError,
AnsibleLookupError,
AnsibleOptionsError,
AnsiblePluginRemovedError,
AnsibleUndefinedVariable,
)
from ansible.module_utils.six import string_types, text_type
from ansible.module_utils._text import to_native, to_text, to_bytes
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.compat.importlib import import_module
from ansible.plugins.loader import filter_loader, lookup_loader, test_loader
from ansible.template.native_helpers import ansible_native_concat, ansible_eval_concat, ansible_concat
from ansible.template.template import AnsibleJ2Template
from ansible.template.vars import AnsibleJ2Vars
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.collection_loader._collection_finder import _get_collection_metadata
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.native_jinja import NativeJinjaText
from ansible.utils.unsafe_proxy import wrap_var
display = Display()
__all__ = ['Templar', 'generate_ansible_template_vars']
# Primitive Types which we don't want Jinja to convert to strings.
NON_TEMPLATED_TYPES = (bool, Number)
JINJA2_OVERRIDE = '#jinja2:'
JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin'))
JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end'))
RANGE_TYPE = type(range(0))
def generate_ansible_template_vars(path, fullpath=None, dest_path=None):
if fullpath is None:
b_path = to_bytes(path)
else:
b_path = to_bytes(fullpath)
try:
template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name
except (KeyError, TypeError):
template_uid = os.stat(b_path).st_uid
temp_vars = {
'template_host': to_text(os.uname()[1]),
'template_path': path,
'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)),
'template_uid': to_text(template_uid),
'template_run_date': datetime.datetime.now(),
'template_destpath': to_native(dest_path) if dest_path else None,
}
if fullpath is None:
temp_vars['template_fullpath'] = os.path.abspath(path)
else:
temp_vars['template_fullpath'] = fullpath
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=temp_vars['template_host'],
uid=temp_vars['template_uid'],
file=temp_vars['template_path'],
)
temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path))))
return temp_vars
def _escape_backslashes(data, jinja_env):
"""Double backslashes within jinja2 expressions
A user may enter something like this in a playbook::
debug:
msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}"
The string inside of the {{ }} gets interpreted multiple times: first by yaml,
then by python, and finally by jinja2 as part of its variable. Because
it is processed by both python and jinja2, the backslash escaped
characters get unescaped twice. This means that we'd normally have to use
four backslashes to escape that. This is painful for playbook authors as
they have to remember different rules for inside vs outside of a jinja2
expression (The backslashes outside of the "{{ }}" only get processed by
yaml and python. So they only need to be escaped once). The following
code fixes this by automatically performing the extra quoting of
backslashes inside of a jinja2 expression.
"""
if '\\' in data and '{{' in data:
new_data = []
d2 = jinja_env.preprocess(data)
in_var = False
for token in jinja_env.lex(d2):
if token[1] == 'variable_begin':
in_var = True
new_data.append(token[2])
elif token[1] == 'variable_end':
in_var = False
new_data.append(token[2])
elif in_var and token[1] == 'string':
# Double backslashes only if we're inside of a jinja2 variable
new_data.append(token[2].replace('\\', '\\\\'))
else:
new_data.append(token[2])
data = ''.join(new_data)
return data
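# Illustrative behaviour, per the docstring above (assumed example): given
#     data = "{{ x | regex_replace('^(.*)$', '\\1') }}"
# the single backslash inside the jinja2 string token is doubled, while any
# backslashes outside of "{{ }}" are left untouched.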
def is_possibly_template(data, jinja_env):
"""Determines if a string looks like a template, by seeing if it
contains a jinja2 start delimiter. Does not guarantee that the string
is actually a template.
This is different than ``is_template`` which is more strict.
This method may return ``True`` on a string that is not templatable.
Useful when guarding passing a string for templating, but when
you want to allow the templating engine to make the final
assessment which may result in ``TemplateSyntaxError``.
"""
if isinstance(data, string_types):
for marker in (jinja_env.block_start_string, jinja_env.variable_start_string, jinja_env.comment_start_string):
if marker in data:
return True
return False
def is_template(data, jinja_env):
"""This function attempts to quickly detect whether a value is a jinja2
template. To do so, we look for the first 2 matching jinja2 tokens for
start and end delimiters.
"""
found = None
start = True
comment = False
d2 = jinja_env.preprocess(data)
# Quick check to see if this is remotely like a template before doing
# more expensive investigation.
if not is_possibly_template(d2, jinja_env):
return False
# This wraps a lot of code, but this is due to lex returning a generator
# so we may get an exception at any part of the loop
try:
for token in jinja_env.lex(d2):
if token[1] in JINJA2_BEGIN_TOKENS:
if start and token[1] == 'comment_begin':
# Comments can wrap other token types
comment = True
start = False
# Example: variable_begin -> variable
found = token[1].split('_')[0]
elif token[1] in JINJA2_END_TOKENS:
if token[1].split('_')[0] == found:
return True
elif comment:
continue
return False
except TemplateSyntaxError:
return False
return False
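# Illustrative (derived from the code above): is_template('{{ foo }}', env)
# returns True, while is_template('{{ foo', env) returns False because no
# matching end delimiter is ever found.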
def _count_newlines_from_end(in_str):
'''
Counts the number of newlines at the end of a string. This is used during
the jinja2 templating to ensure the count matches the input, since some newlines
may be thrown away during the templating.
'''
try:
i = len(in_str)
j = i - 1
while in_str[j] == '\n':
j -= 1
return i - 1 - j
except IndexError:
# Uncommon cases: zero length string and string containing only newlines
return i
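# Illustrative (derived from the code above):
#     _count_newlines_from_end('abc\n\n') == 2
#     _count_newlines_from_end('abc') == 0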
def recursive_check_defined(item):
from jinja2.runtime import Undefined
if isinstance(item, MutableMapping):
for key in item:
recursive_check_defined(item[key])
elif isinstance(item, list):
for i in item:
recursive_check_defined(i)
else:
if isinstance(item, Undefined):
raise AnsibleFilterError("{0} is undefined".format(item))
def _is_rolled(value):
"""Helper method to determine if something is an unrolled generator,
iterator, or similar object
"""
return (
isinstance(value, Iterator) or
isinstance(value, MappingView) or
isinstance(value, RANGE_TYPE)
)
def _unroll_iterator(func):
"""Wrapper function, that intercepts the result of a templating
and auto unrolls a generator, so that users are not required to
explicitly use ``|list`` to unroll.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
if _is_rolled(ret):
return list(ret)
return ret
return _update_wrapper(wrapper, func)
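# Illustrative: a filter returning range(3) reaches the template as [0, 1, 2]
# instead of an opaque range object.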
def _update_wrapper(wrapper, func):
# This code is duplicated from ``functools.update_wrapper`` from Py3.7.
# ``functools.update_wrapper`` was failing when the func was ``functools.partial``
for attr in ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'):
try:
value = getattr(func, attr)
except AttributeError:
pass
else:
setattr(wrapper, attr, value)
for attr in ('__dict__',):
getattr(wrapper, attr).update(getattr(func, attr, {}))
wrapper.__wrapped__ = func
return wrapper
def _wrap_native_text(func):
"""Wrapper function, that intercepts the result of a filter
and wraps it into NativeJinjaText which is then used
in ``ansible_native_concat`` to indicate that it is text
which should not be passed into ``literal_eval``.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
return NativeJinjaText(ret)
return _update_wrapper(wrapper, func)
class AnsibleUndefined(StrictUndefined):
'''
A custom Undefined class, which returns further Undefined objects on access,
rather than throwing an exception.
'''
def __getattr__(self, name):
if name == '__UNSAFE__':
# AnsibleUndefined should never be assumed to be unsafe
# This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True``
raise AttributeError(name)
# Return original Undefined object to preserve the first failure context
return self
def __getitem__(self, key):
# Return original Undefined object to preserve the first failure context
return self
def __repr__(self):
return 'AnsibleUndefined(hint={0!r}, obj={1!r}, name={2!r})'.format(
self._undefined_hint,
self._undefined_obj,
self._undefined_name
)
def __contains__(self, item):
# Return original Undefined object to preserve the first failure context
return self
class AnsibleContext(Context):
'''
A custom context, which intercepts resolve() calls and sets a flag
internally if any variable lookup returns an AnsibleUnsafe value. This
flag is checked post-templating, and (when set) will result in the
final templated result being wrapped in AnsibleUnsafe.
'''
def __init__(self, *args, **kwargs):
super(AnsibleContext, self).__init__(*args, **kwargs)
self.unsafe = False
def _is_unsafe(self, val):
'''
Our helper function, which also recursively checks dict and
list entries, because they may be repr'd and contain a key or
value with jinja2 syntax that would otherwise lose the
AnsibleUnsafe wrapping.
'''
if isinstance(val, dict):
for key in val.keys():
if self._is_unsafe(val[key]):
return True
elif isinstance(val, list):
for item in val:
if self._is_unsafe(item):
return True
elif getattr(val, '__UNSAFE__', False) is True:
return True
return False
def _update_unsafe(self, val):
if val is not None and not self.unsafe and self._is_unsafe(val):
self.unsafe = True
def resolve(self, key):
'''
The intercepted resolve(), which uses the helper above to set the
internal flag whenever an unsafe variable value is returned.
'''
val = super(AnsibleContext, self).resolve(key)
self._update_unsafe(val)
return val
def resolve_or_missing(self, key):
val = super(AnsibleContext, self).resolve_or_missing(key)
self._update_unsafe(val)
return val
def get_all(self):
"""Return the complete context as a dict including the exported
variables. For optimization reasons this might not return an
actual copy, so be careful when using it.
This is to prevent from running ``AnsibleJ2Vars`` through dict():
``dict(self.parent, **self.vars)``
In Ansible this means that ALL variables would be templated in the
process of re-creating the parent because ``AnsibleJ2Vars`` templates
each variable in its ``__getitem__`` method. Instead we re-create the
parent via ``AnsibleJ2Vars.add_locals`` that creates a new
``AnsibleJ2Vars`` copy without templating each variable.
This will prevent unnecessarily templating unused variables in cases
like setting a local variable and passing it to {% include %}
in a template.
Also see ``AnsibleJ2Template`` and
https://github.com/pallets/jinja/commit/d67f0fd4cc2a4af08f51f4466150d49da7798729
"""
if not self.vars:
return self.parent
if not self.parent:
return self.vars
if isinstance(self.parent, AnsibleJ2Vars):
return self.parent.add_locals(self.vars)
else:
# can this happen in Ansible?
return dict(self.parent, **self.vars)
class JinjaPluginIntercept(MutableMapping):
def __init__(self, delegatee, pluginloader, *args, **kwargs):
super(JinjaPluginIntercept, self).__init__(*args, **kwargs)
self._delegatee = delegatee
self._pluginloader = pluginloader
if self._pluginloader.class_name == 'FilterModule':
self._method_map_name = 'filters'
self._dirname = 'filter'
elif self._pluginloader.class_name == 'TestModule':
self._method_map_name = 'tests'
self._dirname = 'test'
self._collection_jinja_func_cache = {}
self._ansible_plugins_loaded = False
def _load_ansible_plugins(self):
if self._ansible_plugins_loaded:
return
for plugin in self._pluginloader.all():
try:
method_map = getattr(plugin, self._method_map_name)
self._delegatee.update(method_map())
except Exception as e:
display.warning("Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin._original_path), e))
continue
if self._pluginloader.class_name == 'FilterModule':
for plugin_name, plugin in self._delegatee.items():
if plugin_name in C.STRING_TYPE_FILTERS:
self._delegatee[plugin_name] = _wrap_native_text(plugin)
else:
self._delegatee[plugin_name] = _unroll_iterator(plugin)
self._ansible_plugins_loaded = True
# FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's impl's
# aren't supposed to change during a run
def __getitem__(self, key):
original_key = key
self._load_ansible_plugins()
try:
if not isinstance(key, string_types):
raise ValueError('key must be a string')
key = to_native(key)
if '.' not in key: # might be a built-in or legacy, check the delegatee dict first, then try for a last-chance base redirect
func = self._delegatee.get(key)
if func:
return func
key, leaf_key = get_fqcr_and_name(key)
seen = set()
while True:
if key in seen:
raise TemplateSyntaxError(
'recursive collection redirect found for %r' % original_key,
0
)
seen.add(key)
acr = AnsibleCollectionRef.try_parse_fqcr(key, self._dirname)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
ts = _get_collection_metadata(acr.collection)
# TODO: implement cycle detection (unified across collection redir as well)
routing_entry = ts.get('plugin_routing', {}).get(self._dirname, {}).get(leaf_key, {})
deprecation_entry = routing_entry.get('deprecation')
if deprecation_entry:
warning_text = deprecation_entry.get('warning_text')
removal_date = deprecation_entry.get('removal_date')
removal_version = deprecation_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" is deprecated'.format(self._dirname, key)
display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection)
tombstone_entry = routing_entry.get('tombstone')
if tombstone_entry:
warning_text = tombstone_entry.get('warning_text')
removal_date = tombstone_entry.get('removal_date')
removal_version = tombstone_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" has been removed'.format(self._dirname, key)
exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date,
collection_name=acr.collection, removed=True)
raise AnsiblePluginRemovedError(exc_msg)
redirect = routing_entry.get('redirect', None)
if redirect:
next_key, leaf_key = get_fqcr_and_name(redirect, collection=acr.collection)
display.vvv('redirecting (type: {0}) {1}.{2} to {3}'.format(self._dirname, acr.collection, acr.resource, next_key))
key = next_key
else:
break
func = self._collection_jinja_func_cache.get(key)
if func:
return func
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
raise KeyError()
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
# TODO: implement collection-level redirect
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
try:
plugin_impl = self._pluginloader.get(module_name)
except Exception as e:
raise TemplateSyntaxError(to_native(e), 0)
try:
method_map = getattr(plugin_impl, self._method_map_name)
func_items = method_map().items()
except Exception as e:
display.warning(
"Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin_impl._original_path), e),
)
continue
for func_name, func in func_items:
fq_name = '.'.join((parent_prefix, func_name))
# FIXME: detect/warn on intra-collection function name collisions
if self._pluginloader.class_name == 'FilterModule':
if fq_name.startswith(('ansible.builtin.', 'ansible.legacy.')) and \
func_name in C.STRING_TYPE_FILTERS:
self._collection_jinja_func_cache[fq_name] = _wrap_native_text(func)
else:
self._collection_jinja_func_cache[fq_name] = _unroll_iterator(func)
else:
self._collection_jinja_func_cache[fq_name] = func
function_impl = self._collection_jinja_func_cache[key]
return function_impl
except AnsiblePluginRemovedError as apre:
raise TemplateSyntaxError(to_native(apre), 0)
except KeyError:
raise
except Exception as ex:
display.warning('an unexpected error occurred during Jinja2 environment setup: {0}'.format(to_native(ex)))
display.vvv('exception during Jinja2 environment setup: {0}'.format(format_exc()))
raise TemplateSyntaxError(to_native(ex), 0)
def __setitem__(self, key, value):
return self._delegatee.__setitem__(key, value)
def __delitem__(self, key):
raise NotImplementedError()
def __iter__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return iter(self._delegatee)
def __len__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return len(self._delegatee)
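# Hedged usage sketch (illustrative): lookups through the intercept resolve
# fully qualified names lazily on first access, e.g.
#   env.filters['ansible.builtin.b64decode']      # built-in
#   env.filters['community.general.json_query']   # needs that collection installed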
def get_fqcr_and_name(resource, collection='ansible.builtin'):
if '.' not in resource:
name = resource
fqcr = collection + '.' + resource
else:
name = resource.split('.')[-1]
fqcr = resource
return fqcr, name
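# Hedged usage sketch (illustrative, not executed):
#   get_fqcr_and_name('flatten')
#       ->  ('ansible.builtin.flatten', 'flatten')
#   get_fqcr_and_name('community.general.json_query')
#       ->  ('community.general.json_query', 'json_query')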
@_unroll_iterator
def _ansible_finalize(thing):
"""A custom finalize function for jinja2, which prevents None from being
returned. This avoids a string of ``"None"`` as ``None`` has no
importance in YAML.
The function is decorated with ``_unroll_iterator`` so that users are not
required to explicitly use ``|list`` to unroll a generator. This only
affects the scenario where the final result of templating
is a generator, e.g. ``range``, ``dict.items()`` and so on. Filters
which can produce a generator in the middle of a template are already
wrapped with ``_unroll_generator`` in ``JinjaPluginIntercept``.
"""
return thing if thing is not None else ''
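# Hedged usage sketch (illustrative, not executed):
#   _ansible_finalize(None)      ->  ''
#   _ansible_finalize(range(3))  ->  [0, 1, 2]  (unrolled by the decorator)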
class AnsibleEnvironment(NativeEnvironment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
concat = staticmethod(ansible_eval_concat)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader)
self.tests = JinjaPluginIntercept(self.tests, test_loader)
self.trim_blocks = True
self.undefined = AnsibleUndefined
self.finalize = _ansible_finalize
class AnsibleNativeEnvironment(AnsibleEnvironment):
concat = staticmethod(ansible_native_concat)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.finalize = _unroll_iterator(lambda thing: thing)
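# Hedged sketch of the difference (illustrative): under the native
# environment '{{ [1, 2] + [3] }}' concatenates to the Python list
# [1, 2, 3] directly, rather than passing through a string representation
# the way the eval-based concat of AnsibleEnvironment does.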
class Templar:
'''
The main class for templating, with the main entry-point of template().
'''
def __init__(self, loader, shared_loader_obj=None, variables=None):
# NOTE shared_loader_obj is deprecated, ansible.plugins.loader is used
# directly. Keeping the arg for now in case 3rd party code "uses" it.
self._loader = loader
self._available_variables = {} if variables is None else variables
self._cached_result = {}
self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
environment_class = AnsibleNativeEnvironment if C.DEFAULT_JINJA2_NATIVE else AnsibleEnvironment
self.environment = environment_class(
extensions=self._get_extensions(),
loader=FileSystemLoader(loader.get_basedir() if loader else '.'),
)
self.environment.template_class.environment_class = environment_class
# jinja2 global is inconsistent across versions, this normalizes them
self.environment.globals['dict'] = dict
# Custom globals
self.environment.globals['lookup'] = self._lookup
self.environment.globals['query'] = self.environment.globals['q'] = self._query_lookup
self.environment.globals['now'] = self._now_datetime
self.environment.globals['undef'] = self._make_undefined
# the current rendering context under which the templar class is working
self.cur_context = None
# FIXME this regex should be re-compiled each time variable_start_string and variable_end_string are changed
self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string))
self.jinja2_native = C.DEFAULT_JINJA2_NATIVE
def copy_with_new_env(self, environment_class=AnsibleEnvironment, **kwargs):
r"""Creates a new copy of Templar with a new environment.
:kwarg environment_class: Environment class used for creating a new environment.
:kwarg \*\*kwargs: Optional arguments for the new environment that override existing
environment attributes.
:returns: Copy of Templar with updated environment.
"""
# We need to use __new__ to skip __init__, mainly to avoid creating a new
# environment there only to override it below.
new_env = object.__new__(environment_class)
new_env.__dict__.update(self.environment.__dict__)
new_templar = object.__new__(Templar)
new_templar.__dict__.update(self.__dict__)
new_templar.environment = new_env
new_templar.jinja2_native = environment_class is AnsibleNativeEnvironment
mapping = {
'available_variables': new_templar,
'searchpath': new_env.loader,
}
for key, value in kwargs.items():
obj = mapping.get(key, new_env)
try:
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
return new_templar
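# Hedged usage sketch (illustrative): create a native-concat copy that
# shares this Templar's state without re-running __init__:
#   native_templar = templar.copy_with_new_env(
#       environment_class=AnsibleNativeEnvironment)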
def _get_extensions(self):
'''
Return jinja2 extensions to load.
If some extensions are set via jinja_extensions in ansible.cfg, we try
to load them with the jinja environment.
'''
jinja_exts = []
if C.DEFAULT_JINJA2_EXTENSIONS:
# make sure the configuration directive doesn't contain spaces
# and split extensions in an array
jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',')
return jinja_exts
@property
def available_variables(self):
return self._available_variables
@available_variables.setter
def available_variables(self, variables):
'''
Sets the mapping of template variables this Templar instance will use
to template things, so we don't have to pass them around between
internal methods. We also clear the template cache here, as the
variables are being changed.
'''
if not isinstance(variables, Mapping):
raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables)))
self._available_variables = variables
self._cached_result = {}
@contextmanager
def set_temporary_context(self, **kwargs):
"""Context manager used to set temporary templating context, without having to worry about resetting
original values afterward
Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to
set context on another object, it must be in ``mapping``.
"""
mapping = {
'available_variables': self,
'searchpath': self.environment.loader,
}
original = {}
for key, value in kwargs.items():
obj = mapping.get(key, self.environment)
try:
original[key] = getattr(obj, key)
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
yield
for key in original:
obj = mapping.get(key, self.environment)
setattr(obj, key, original[key])
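# Hedged usage sketch (illustrative):
#   with templar.set_temporary_context(available_variables={'x': 1}):
#       templar.template('{{ x }}')
#   # the original attributes are restored on exit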
def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None,
convert_data=True, static_vars=None, cache=True, disable_lookups=False):
'''
Templates (possibly recursively) any given data as input. If convert_bare is
set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}')
before being sent through the template engine.
'''
static_vars = [] if static_vars is None else static_vars
# Don't template unsafe variables, just return them.
if hasattr(variable, '__UNSAFE__'):
return variable
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
if convert_bare:
variable = self._convert_bare_variable(variable)
if isinstance(variable, string_types):
if not self.is_possibly_template(variable):
return variable
# Check to see if the string we are trying to render is just referencing a single
# var. In this case we don't want to accidentally change the type of the variable
# to a string by using the jinja template renderer. We just want to pass it.
only_one = self.SINGLE_VAR.match(variable)
if only_one:
var_name = only_one.group(1)
if var_name in self._available_variables:
resolved_val = self._available_variables[var_name]
if isinstance(resolved_val, NON_TEMPLATED_TYPES):
return resolved_val
elif resolved_val is None:
return C.DEFAULT_NULL_REPRESENTATION
# Using a cache in order to prevent template calls with already templated variables
sha1_hash = None
if cache:
variable_hash = sha1(text_type(variable).encode('utf-8'))
options_hash = sha1(
(
text_type(preserve_trailing_newlines) +
text_type(escape_backslashes) +
text_type(fail_on_undefined) +
text_type(overrides)
).encode('utf-8')
)
sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest()
if sha1_hash in self._cached_result:
return self._cached_result[sha1_hash]
result = self.do_template(
variable,
preserve_trailing_newlines=preserve_trailing_newlines,
escape_backslashes=escape_backslashes,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
convert_data=convert_data,
)
# we only cache in the case where we have a single variable
# name, to make sure we're not putting things which may otherwise
# be dynamic in the cache (filters, lookups, etc.)
if cache and only_one:
self._cached_result[sha1_hash] = result
return result
elif is_sequence(variable):
return [self.template(
v,
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
) for v in variable]
elif isinstance(variable, Mapping):
d = {}
# we don't use iteritems() here to avoid problems if the underlying dict
# changes sizes due to the templating, which can happen with hostvars
for k in variable.keys():
if k not in static_vars:
d[k] = self.template(
variable[k],
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
else:
d[k] = variable[k]
return d
else:
return variable
def is_template(self, data):
'''Lets us know whether the data contains a template.'''
if isinstance(data, string_types):
return is_template(data, self.environment)
elif isinstance(data, (list, tuple)):
for v in data:
if self.is_template(v):
return True
elif isinstance(data, dict):
for k in data:
if self.is_template(k) or self.is_template(data[k]):
return True
return False
templatable = is_template
def is_possibly_template(self, data):
return is_possibly_template(data, self.environment)
def _convert_bare_variable(self, variable):
'''
Wraps a bare string, which may have an attribute portion (i.e. foo.bar),
in jinja2 variable braces so that it is evaluated properly.
'''
if isinstance(variable, string_types):
contains_filters = "|" in variable
first_part = variable.split("|")[0].split(".")[0].split("[")[0]
if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable:
return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string)
# the variable didn't meet the conditions to be converted,
# so just return it as-is
return variable
def _fail_lookup(self, name, *args, **kwargs):
raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name)
def _now_datetime(self, utc=False, fmt=None):
'''jinja2 global function to return current datetime, potentially formatted via strftime'''
if utc:
now = datetime.datetime.utcnow()
else:
now = datetime.datetime.now()
if fmt:
return now.strftime(fmt)
return now
def _query_lookup(self, name, *args, **kwargs):
'''Wrapper for lookup that forces wantlist=True.'''
kwargs['wantlist'] = True
return self._lookup(name, *args, **kwargs)
def _lookup(self, name, *args, **kwargs):
instance = lookup_loader.get(name, loader=self._loader, templar=self)
if instance is None:
raise AnsibleError("lookup plugin (%s) not found" % name)
wantlist = kwargs.pop('wantlist', False)
allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS)
errors = kwargs.pop('errors', 'strict')
loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, loader=self._loader, fail_on_undefined=True, convert_bare=False)
# safely catch run failures per #5059
try:
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
except (AnsibleUndefinedVariable, UndefinedError) as e:
raise AnsibleUndefinedVariable(e)
except AnsibleOptionsError as e:
# invalid options given to lookup, just reraise
raise e
except AnsibleLookupError as e:
# lookup handled error but still decided to bail
msg = 'Lookup failed but the error is being ignored: %s' % to_native(e)
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
raise e
return [] if wantlist else None
except Exception as e:
# errors not handled by lookup
msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \
(name, type(e), to_text(e))
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
display.vvv('exception during Jinja2 execution: {0}'.format(format_exc()))
raise AnsibleError(to_native(msg), orig_exc=e)
return [] if wantlist else None
if ran and allow_unsafe is False:
if self.cur_context:
self.cur_context.unsafe = True
if wantlist:
return wrap_var(ran)
try:
if isinstance(ran[0], NativeJinjaText):
ran = wrap_var(NativeJinjaText(",".join(ran)))
else:
ran = wrap_var(",".join(ran))
except TypeError:
# Lookup Plugins should always return lists. Throw an error if that's not
# the case:
if not isinstance(ran, Sequence):
raise AnsibleError("The lookup plugin '%s' did not return a list."
% name)
# The TypeError we can recover from is when the value *inside* of the list
# is not a string
if len(ran) == 1:
ran = wrap_var(ran[0])
else:
ran = wrap_var(ran)
return ran
def _make_undefined(self, hint=None):
from jinja2.runtime import Undefined
if hint is None or isinstance(hint, Undefined) or hint == '':
hint = "Mandatory variable has not been overridden"
return AnsibleUndefined(hint)
def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False,
convert_data=False):
if self.jinja2_native and not isinstance(data, string_types):
return data
# For preserving the number of input newlines in the output (used
# later in this method)
data_newlines = _count_newlines_from_end(data)
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
has_template_overrides = data.startswith(JINJA2_OVERRIDE)
try:
# NOTE Creating an overlay that lives only inside do_template means that overrides are not applied
# when templating nested variables in AnsibleJ2Vars where Templar.environment is used, not the overlay.
# This is historic behavior that is kept for backwards compatibility.
if overrides:
myenv = self.environment.overlay(overrides)
elif has_template_overrides:
myenv = self.environment.overlay()
else:
myenv = self.environment
# Get jinja env overrides from template
if has_template_overrides:
eol = data.find('\n')
line = data[len(JINJA2_OVERRIDE):eol]
data = data[eol + 1:]
for pair in line.split(','):
if ':' not in pair:
raise AnsibleError("failed to parse jinja2 override '%s'."
" Did you use something different from colon as key-value separator?" % pair.strip())
(key, val) = pair.split(':', 1)
key = key.strip()
setattr(myenv, key, ast.literal_eval(val.strip()))
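# Hedged example (illustrative): a template whose first line is
#   #jinja2: trim_blocks:False, lstrip_blocks:True
# sets those attributes on the overlay, with values parsed by literal_eval.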
if escape_backslashes:
# Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\".
data = _escape_backslashes(data, myenv)
try:
t = myenv.from_string(data)
except TemplateSyntaxError as e:
raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data)))
except Exception as e:
if 'recursion' in to_native(e):
raise AnsibleError("recursive loop detected in template string: %s" % to_native(data))
else:
return data
if disable_lookups:
t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup
jvars = AnsibleJ2Vars(self, t.globals)
# In case this is a recursive call to do_template we need to
# save/restore cur_context to prevent overriding __UNSAFE__.
cached_context = self.cur_context
self.cur_context = t.new_context(jvars, shared=True)
rf = t.root_render_func(self.cur_context)
try:
if not self.jinja2_native and not convert_data:
res = ansible_concat(rf)
else:
res = self.environment.concat(rf)
unsafe = getattr(self.cur_context, 'unsafe', False)
if unsafe:
res = wrap_var(res)
except TypeError as te:
if 'AnsibleUndefined' in to_native(te):
errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data)
errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te)
raise AnsibleUndefinedVariable(errmsg)
else:
display.debug("failing because of a type error, template data is: %s" % to_text(data))
raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te)))
finally:
self.cur_context = cached_context
if isinstance(res, string_types) and preserve_trailing_newlines:
# The low level calls above do not preserve the newline
# characters at the end of the input data, so we
# calculate the difference in newlines and append them
# to the resulting output for parity
#
# Using Environment's keep_trailing_newline instead would
# result in change in behavior when trailing newlines
# would be kept also for included templates, for example:
# "Hello {% include 'world.txt' %}!" would render as
# "Hello world\n!\n" instead of "Hello world!\n".
res_newlines = _count_newlines_from_end(res)
if data_newlines > res_newlines:
res += self.environment.newline_sequence * (data_newlines - res_newlines)
if unsafe:
res = wrap_var(res)
return res
except (UndefinedError, AnsibleUndefinedVariable) as e:
if fail_on_undefined:
raise AnsibleUndefinedVariable(e)
else:
display.debug("Ignoring undefined failure: %s" % to_text(e))
return data
# for backwards compatibility in case anyone is using old private method directly
_do_template = do_template
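# Hedged usage sketch (illustrative; assumes ansible.parsing.dataloader is
# importable in this environment):
#   from ansible.parsing.dataloader import DataLoader
#   templar = Templar(loader=DataLoader(), variables={'greeting': 'hello'})
#   templar.template('{{ greeting | upper }}')  ->  'HELLO'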
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,788 |
lookup plugin broken when wantlist=False
|
### Summary
Copied from amazon.aws https://github.com/ansible-collections/amazon.aws/issues/633
https://github.com/ansible/ansible/pull/75587 appears to have broken amazon.aws.lookup_aws_account_attribute when run with wantlist=False.
### Issue Type
Bug Report
### Component Name
lib/ansible/template/__init__.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (devel f75bb09a8b) last updated 2022/05/12 17:31:50 (GMT -400)
config file = None
configured module search path = ['/home/josephtorcasso/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/josephtorcasso/633/ansible/lib/ansible
ansible collection location = /home/josephtorcasso/.ansible/collections:/usr/share/ansible/collections
executable location = /home/josephtorcasso/633/venv/bin/ansible
python version = 3.10.4 (main, Mar 25 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)] (/home/josephtorcasso/633/venv/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Fedora 35
### Steps to Reproduce
Running the integration tests for amazon.aws lookup_aws_account_attribute.
### Expected Results
Expected the tests to pass when using wantlist=False.
### Actual Results
```console
TASK [lookup_aws_account_attribute : Fetch all account attributes (wantlist=False)] *******************************************************************************************************************************
task path: /root/ansible_collections/amazon/aws/tests/output/.tmp/integration/lookup_aws_account_attribute-bjtiq49h-ÅÑŚÌβŁÈ/tests/integration/targets/lookup_aws_account_attribute/tasks/main.yaml:50
The full traceback is:
Traceback (most recent call last):
File "/root/ansible/lib/ansible/executor/task_executor.py", line 503, in _execute
self._task.post_validate(templar=templar)
File "/root/ansible/lib/ansible/playbook/task.py", line 283, in post_validate
super(Task, self).post_validate(templar)
File "/root/ansible/lib/ansible/playbook/base.py", line 650, in post_validate
value = templar.template(getattr(self, name))
File "/root/ansible/lib/ansible/template/__init__.py", line 874, in template
d[k] = self.template(
File "/root/ansible/lib/ansible/template/__init__.py", line 842, in template
result = self.do_template(
File "/root/ansible/lib/ansible/template/__init__.py", line 1101, in do_template
res = ansible_concat(rf, convert_data, myenv.variable_start_string)
File "/root/ansible/lib/ansible/template/native_helpers.py", line 60, in ansible_concat
head = list(islice(nodes, 2))
File "<template>", line 13, in root
File "/usr/lib/python3.10/dist-packages/jinja2/runtime.py", line 349, in call
return __obj(*args, **kwargs)
File "/root/ansible/lib/ansible/template/__init__.py", line 1013, in _lookup
if isinstance(ran[0], NativeJinjaText):
KeyError: 0
fatal: [testhost]: FAILED! => {
"changed": false
}
```
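
A minimal sketch of the failing branch (a hypothetical reduction grounded in the traceback above): `ran` here is the mapping returned by the lookup, and indexing a mapping with `0` raises `KeyError`, which the surrounding `except TypeError` in `Templar._lookup` never catches.

```python
# Hypothetical reduction of the failure: a lookup that returns a mapping
# instead of a list hits dict.__getitem__ with an integer key.
ran = {'max-elastic-ips': '5'}   # mapping, not a list (illustrative value)
ran[0]                           # KeyError: 0 -- TypeError is never raised
```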
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77788
|
https://github.com/ansible/ansible/pull/77789
|
5e50284693cb5531eb4265a0ab94b35be89457f6
|
c9ce7d08a256646abdaccc80b480b8b9c2df9f1b
| 2022-05-13T01:16:52Z |
python
| 2022-05-17T16:24:53Z |
test/integration/targets/templating_lookups/runme.sh
|
#!/usr/bin/env bash
set -eux
ANSIBLE_ROLES_PATH=./ UNICODE_VAR=café ansible-playbook runme.yml "$@"
ansible-playbook template_lookup_vaulted/playbook.yml --vault-password-file template_lookup_vaulted/test_vault_pass "$@"
ansible-playbook template_deepcopy/playbook.yml -i template_deepcopy/hosts "$@"
|