Dataset schema: status (stringclasses 1) | repo_name (stringclasses 31) | repo_url (stringclasses 31) | issue_id (int64, 1-104k) | title (stringlengths 4-369) | body (stringlengths 0-254k, nullable) | issue_url (stringlengths 37-56) | pull_url (stringlengths 37-54) | before_fix_sha (stringlengths 40) | after_fix_sha (stringlengths 40) | report_datetime (timestamp[us, tz=UTC]) | language (stringclasses 5) | commit_datetime (timestamp[us, tz=UTC]) | updated_file (stringlengths 4-188) | file_content (stringlengths 0-5.12M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,955 |
Change --api-key to --token in ansible-galaxy collection commands
|
##### SUMMARY
In #65933, I had some confusion over how to set my Galaxy token for CLI usage when running `ansible-galaxy collection publish`, so I could avoid setting `--api-key` on the command line.
Some of that confusion could be avoided by using the same terminology when referring to the Galaxy 'token' everywhere. Currently the docs pages for Galaxy and Collections both mention 'token' multiple times, and only mention the term 'api-key' once, in reference to the Galaxy CLI.
Complicating matters, if I log into galaxy.ansible.com and go to my Preferences, I am shown an "API Key" section. So... it is even less clear whether I should be using a token or API key in various places.
I think, in the end, they are the same thing. But it would be more clear if we just call it one thing everywhere. Therefore, I propose we deprecate the `--api-key` parameter in the `ansible-galaxy` CLI, and use `--token`.
(And we should also change the UI in Galaxy to read 'Token' instead of 'API Key', maybe?)
Alternatively, if 'API Key' is the chosen nomenclature, let's use that everywhere, and ditch the use of 'token'...?
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
galaxy
##### ADDITIONAL INFORMATION
See related: #65933
|
https://github.com/ansible/ansible/issues/65955
|
https://github.com/ansible/ansible/pull/66376
|
0ab0e1556b7380e0b432d0b9ce21849539befc8f
|
1f340721a7f651992e1cdb277c27d7da3bd09ced
| 2019-12-18T19:11:18Z |
python
| 2020-01-16T20:51:21Z |
changelogs/fragments/ansible-galaxy-cli-add-token-alias.yaml
| |
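A `--token` alias like the one proposed in the issue can be expressed directly with argparse, since `add_argument` accepts multiple option strings for a single destination. This is a minimal illustrative sketch, not the actual change from PR #66376; the option names mirror the issue and the surrounding CLI wiring is omitted.

```python
# Sketch: accept both --token and the legacy --api-key spelling for the
# same value. argparse allows several option strings per argument; both
# populate the same dest, so downstream code is unchanged.
import argparse

parser = argparse.ArgumentParser(prog='ansible-galaxy')
parser.add_argument('--token', '--api-key', dest='api_key',
                    help='The Ansible Galaxy API token (alias: --api-key).')

# Either spelling fills the same namespace attribute:
print(parser.parse_args(['--token', 'abc123']).api_key)    # abc123
print(parser.parse_args(['--api-key', 'abc123']).api_key)  # abc123
```

A full deprecation of `--api-key` would additionally need a warning when the old spelling is used, which argparse does not provide out of the box.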
lib/ansible/cli/galaxy.py
|
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os.path
import re
import shutil
import textwrap
import time
import yaml
from jinja2 import BaseLoader, Environment, FileSystemLoader
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import build_collection, install_collections, publish_collection, \
validate_collection_name
from ansible.galaxy.login import GalaxyLogin
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
# Inject role into sys.argv[1] as a backwards compatibility step
if len(args) > 1 and args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
# Remove this in Ansible 2.13 when we also remove -v as an option on the root parser for ansible-galaxy.
idx = 2 if args[1].startswith('-v') else 1
args.insert(idx, 'role')
self.api_servers = []
self.galaxy = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences. You can also use ansible-galaxy login to '
'retrieve this key or set the token for the GALAXY_SERVER_LIST entry.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs',
default=C.GALAXY_IGNORE_CERTS, help='Ignore SSL certificate validation errors.')
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_login_options(role_parser, parents=[common])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each role installed in the roles_path.')
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument('role', help='Role', nargs='?', metavar='role')
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_login_options(self, parser, parents=None):
login_parser = parser.add_parser('login', parents=parents,
help="Login to api.github.com server in order to use ansible-galaxy role sub "
"command such as 'import', 'delete', 'publish', and 'setup'")
login_parser.set_defaults(func=self.execute_login)
login_parser.add_argument('--github-token', dest='token', default=None,
help='Identify with github token rather than username and password.')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=C.COLLECTIONS_PATHS[0],
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
else:
install_parser.add_argument('-r', '--role-file', dest='role_file',
help='A file containing a list of roles to be imported.')
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection is built. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required):
return {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
}
server_def = [('url', True), ('username', False), ('password', False), ('token', False),
('auth_url', False)]
config_servers = []
# Need to filter out empty strings or non truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_key in server_list:
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req)) for k, req in server_def)
defs = AnsibleLoader(yaml.safe_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as a kwarg to GalaxyAPI
auth_url = server_options.pop('auth_url', None)
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username,
server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=not context.CLIARGS['ignore_certs'])
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
config_servers.append(GalaxyAPI(self.galaxy, server_key, **server_options))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
if cmd_server:
# Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(self.galaxy, 'cmd_arg', cmd_server, token=cmd_token))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token))
context.CLIARGS['func']()
@property
def api(self):
return self.api_servers[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True):
"""
Parses an Ansible requirement.yml file and returns all the roles and/or collections defined in it. There are 2
requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name, defaults to Galaxy name from Galaxy or name of repo if src is a URL.
scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git.
version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml.safe_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml.safe_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles', []):
requirements['roles'] += parse_role_req(role_req)
for collection_req in file_requirements.get('collections', []):
if isinstance(collection_req, dict):
req_name = collection_req.get('name', None)
if req_name is None:
raise AnsibleError("Collections requirement entry should contain the key name.")
req_version = collection_req.get('version', '*')
req_source = collection_req.get('source', None)
if req_source:
# Try and match up the requirement source with our list of Galaxy API servers defined in the
# config, otherwise create a server with that URL without any auth.
req_source = next(iter([a for a in self.api_servers if req_source in [a.name, a.api_server]]),
GalaxyAPI(self.galaxy, "explicit_requirement_%s" % req_name, req_source))
requirements['collections'].append((req_name, req_version, req_source))
else:
requirements['collections'].append((collection_req, '*', None))
return requirements
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
text.append(u"\tdescription: %s" % role_info.get('description', ''))
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
def to_yaml(v):
return yaml.safe_dump(v, default_flow_style=False).rstrip()
env = Environment(loader=BaseLoader)
env.filters['comment_ify'] = comment_ify
env.filters['to_yaml'] = to_yaml
template = env.from_string(meta_template)
meta_value = template.render({'required_config': required_config, 'optional_config': optional_config})
return meta_value
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(collection_path, output_path, force)
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
))
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
if obj_skeleton is not None:
own_skeleton = False
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
template_env = Environment(loader=FileSystemLoader(obj_skeleton))
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
elif galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(rel_root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_env.get_template(src_template).stream(inject_data).dump(dest_file, encoding='utf-8')
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
prints out detailed information about an installed role as well as info available from the galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
remote_data = False
if not context.CLIARGS['offline']:
remote_data = self.api.lookup_role_by_name(role, False)
if remote_data:
role_info.update(remote_data)
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data = self._display_role_info(role_info)
# FIXME: This is broken in both 1.9 and 2.0 as
# _display_role_info() always returns something
if not data:
data = u"\n- the role %s was not found" % role
self.pager(data)
def execute_install(self):
"""
Install one or more roles (``ansible-galaxy role install``), or one or more collections (``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file.
"""
if context.CLIARGS['type'] == 'collection':
collections = context.CLIARGS['args']
force = context.CLIARGS['force']
output_path = context.CLIARGS['collections_path']
ignore_certs = context.CLIARGS['ignore_certs']
ignore_errors = context.CLIARGS['ignore_errors']
requirements_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(requirements_file, allow_old_format=False)['collections']
else:
requirements = []
for collection_input in collections:
requirement = None
if os.path.isfile(to_bytes(collection_input, errors='surrogate_or_strict')) or \
urlparse(collection_input).scheme.lower() in ['http', 'https']:
# Arg is a file path or URL to a collection
name = collection_input
else:
name, dummy, requirement = collection_input.partition(':')
requirements.append((name, requirement or '*', None))
output_path = GalaxyCLI._resolve_path(output_path)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(output_path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection won't be picked up in an Ansible "
"run." % (to_text(output_path), to_text(":".join(collections_path))))
if os.path.split(output_path)[1] != 'ansible_collections':
output_path = os.path.join(output_path, 'ansible_collections')
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(requirements, output_path, self.api_servers, (not ignore_certs), ignore_errors,
no_deps, force, force_deps)
return 0
role_file = context.CLIARGS['role_file']
if not context.CLIARGS['args'] and role_file is None:
# the user needs to specify one of either --role-file or specify a single user/role name
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
roles_left = []
if role_file:
if not (role_file.endswith('.yaml') or role_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
roles_left = self._parse_requirements_file(role_file)['roles']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
roles_left.append(GalaxyRole(self.galaxy, self.api, **role))
for role in roles_left:
# only process roles in roles files when names matches if given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = role.metadata.get('dependencies') or []
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in roles_left:
display.display('- adding dependency: %s' % to_text(dep_role))
roles_left.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
roles_left.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
roles_left.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
lists the roles installed on the local system or matches a single role passed as an argument.
"""
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
if context.CLIARGS['role']:
# show the requested role, if it exists
name = context.CLIARGS['role']
gr = GalaxyRole(self.galaxy, self.api, name)
if gr.metadata:
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
else:
display.display("- the role %s was not found" % name)
else:
# show all valid roles in the roles_path directory
roles_path = context.CLIARGS['roles_path']
path_found = False
warnings = []
for path in roles_path:
role_path = os.path.expanduser(path)
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
elif not os.path.isdir(role_path):
warnings.append("- the configured path %s exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
path_found = True
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.api, path_file, path=path)
if gr.metadata:
_display_role(gr)
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths was usable. Please specify a valid path with --roles-path")
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' searches for roles on the Ansible Galaxy server'''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return True
def execute_login(self):
"""
verify user's identity via GitHub and retrieve an auth token from Ansible Galaxy.
"""
# Authenticate with github and retrieve a token
if context.CLIARGS['token'] is None:
if C.GALAXY_TOKEN:
github_token = C.GALAXY_TOKEN
else:
login = GalaxyLogin(self.galaxy)
github_token = login.create_github_token()
else:
github_token = context.CLIARGS['token']
galaxy_response = self.api.authenticate(github_token)
if context.CLIARGS['token'] is None and C.GALAXY_TOKEN is None:
# Remove the token we created
login.remove_github_token()
# Store the Galaxy token
token = GalaxyToken()
token.set(galaxy_response['token'])
display.display("Successfully logged into Galaxy as %s" % galaxy_response['username'])
return 0
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,568 |
eos_l3_interfaces: idempotent not working as expected for state=replaced
|
##### SUMMARY
- When a playbook with eos_l3_interfaces, state=replaced, is run twice, changed is always 'true'.
- In the second run, the configured ipv4 address is removed.
This happens because `no ip address` is always included in the generated commands.
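A minimal standalone sketch (hypothetical names, not the module itself) of why the replaced state never converges: the desired address dict carries an explicit `secondary: None` key that the gathered facts lack, so the set-of-tuples comparison used in `set_interface()`/`clear_interface()` never treats the two addresses as equal.

```python
def as_tuple_set(addresses):
    # Mirrors `set(tuple(address.items()) for address in ...)` in the module.
    return set(tuple(address.items()) for address in addresses or [])

want = [{"address": "203.1.1.1/24", "secondary": None}]  # from module args
have = [{"address": "203.1.1.1/24"}]                     # from gathered facts

want_ipv4 = as_tuple_set(want)
have_ipv4 = as_tuple_set(have)

# Both differences are non-empty even though the interface is already
# configured, so "no ip address" / "ip address ..." are emitted on every run.
print(sorted(have_ipv4 - want_ipv4))
print(sorted(want_ipv4 - have_ipv4))
```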
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_l3_interfaces.py
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gosriniv/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/gosriniv/ansible/lib/ansible
executable location = /home/gosriniv/ansible/bin/ansible
python version = 3.6.5 (default, Sep 4 2019, 12:23:33) [GCC 9.0.1 20190312 (Red Hat 9.0.1-0.10)]
```
##### OS / ENVIRONMENT
Arista EOS
##### STEPS TO REPRODUCE
```
eos_l3_interfaces:
config:
- name : Ethernet1
ipv4:
- address: 203.1.1.1/24
- name: Ethernet2
ipv6:
- address: 5001::1/64
state: replaced
```
##### EXPECTED RESULTS
Changed should be False in the second run.
Commands should be empty in the second run
##### ACTUAL RESULTS
```
First Run
===========
changed: [10.8.38.32] => {
"after": [
{
"ipv4": [
{
"address": "203.1.1.1/24"
}
],
"name": "Ethernet1"
},
{
"ipv6": [
{
"address": "5001::1/64"
}
],
"name": "Ethernet2"
},
{
"name": "Ethernet3"
},
{
"name": "Ethernet4"
},
{
"name": "Ethernet5"
},
{
"name": "Ethernet6"
},
{
"name": "Ethernet7"
},
{
"ipv4": [
{
"address": "10.8.38.32/24"
}
],
"name": "Management1"
}
],
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"before": [
{
"name": "Ethernet1"
},
{
"ipv6": [
{
"address": "5001::1/64"
}
],
"name": "Ethernet2"
},
{
"name": "Ethernet3"
},
{
"name": "Ethernet4"
},
{
"name": "Ethernet5"
},
{
"name": "Ethernet6"
},
{
"name": "Ethernet7"
},
{
"ipv4": [
{
"address": "10.8.38.32/24"
}
],
"name": "Management1"
}
],
"changed": true,
"commands": [
"interface Ethernet2",
"no ip address",
"interface Ethernet1",
"ip address 203.1.1.1/24"
],
"invocation": {
"module_args": {
"config": [
{
"ipv4": [
{
"address": "203.1.1.1/24",
"secondary": null
}
],
"ipv6": null,
"name": "Ethernet1"
},
{
"ipv4": null,
"ipv6": [
{
"address": "5001::1/64"
}
],
"name": "Ethernet2"
}
],
"state": "replaced"
}
}
}
META: ran handlers
META: ran handlers
PLAY RECAP ***********************************************************************************************************************************************************************************
10.8.38.32 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Second Run:
==========
changed: [10.8.38.32] => {
"after": [
{
"name": "Ethernet1"
},
{
"ipv6": [
{
"address": "5001::1/64"
}
],
"name": "Ethernet2"
},
{
"name": "Ethernet3"
},
{
"name": "Ethernet4"
},
{
"name": "Ethernet5"
},
{
"name": "Ethernet6"
},
{
"name": "Ethernet7"
},
{
"ipv4": [
{
"address": "10.8.38.32/24"
}
],
"name": "Management1"
}
],
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"before": [
{
"ipv4": [
{
"address": "203.1.1.1/24"
}
],
"name": "Ethernet1"
},
{
"ipv6": [
{
"address": "5001::1/64"
}
],
"name": "Ethernet2"
},
{
"name": "Ethernet3"
},
{
"name": "Ethernet4"
},
{
"name": "Ethernet5"
},
{
"name": "Ethernet6"
},
{
"name": "Ethernet7"
},
{
"ipv4": [
{
"address": "10.8.38.32/24"
}
],
"name": "Management1"
}
],
"changed": true,
"commands": [
"interface Ethernet2",
"no ip address",
"interface Ethernet1",
"no ip address"
],
"invocation": {
"module_args": {
"config": [
{
"ipv4": [
{
"address": "203.1.1.1/24",
"secondary": null
}
],
"ipv6": null,
"name": "Ethernet1"
},
{
"ipv4": null,
"ipv6": [
{
"address": "5001::1/64"
}
],
"name": "Ethernet2"
}
],
"state": "replaced"
}
}
}
META: ran handlers
META: ran handlers
PLAY RECAP ***********************************************************************************************************************************************************************************
10.8.38.32 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/64568
|
https://github.com/ansible/ansible/pull/64621
|
7ddf7474d3cfc376bdb1da6151265dbc1db93c87
|
87fd93f14005c858e558209ec34ca401c19d445a
| 2019-11-07T16:31:53Z |
python
| 2020-01-17T20:23:02Z |
lib/ansible/module_utils/network/eos/config/l3_interfaces/l3_interfaces.py
|
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The eos_l3_interfaces class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to its desired end-state is
created
"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.common.utils import to_list, param_list_to_dict
from ansible.module_utils.network.eos.facts.facts import Facts
from ansible.module_utils.network.eos.utils.utils import normalize_interface
class L3_interfaces(ConfigBase):
"""
The eos_l3_interfaces class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'l3_interfaces',
]
def get_l3_interfaces_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
l3_interfaces_facts = facts['ansible_network_resources'].get('l3_interfaces')
if not l3_interfaces_facts:
return []
return l3_interfaces_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
commands = list()
warnings = list()
existing_l3_interfaces_facts = self.get_l3_interfaces_facts()
commands.extend(self.set_config(existing_l3_interfaces_facts))
if commands:
if not self._module.check_mode:
self._connection.edit_config(commands)
result['changed'] = True
result['commands'] = commands
changed_l3_interfaces_facts = self.get_l3_interfaces_facts()
result['before'] = existing_l3_interfaces_facts
if result['changed']:
result['after'] = changed_l3_interfaces_facts
result['warnings'] = warnings
return result
def set_config(self, existing_l3_interfaces_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_l3_interfaces_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
state = self._module.params['state']
want = param_list_to_dict(want)
have = param_list_to_dict(have)
if state == 'overridden':
commands = self._state_overridden(want, have)
elif state == 'deleted':
commands = self._state_deleted(want, have)
elif state == 'merged':
commands = self._state_merged(want, have)
elif state == 'replaced':
commands = self._state_replaced(want, have)
return commands
@staticmethod
def _state_replaced(want, have):
""" The command generator when state is replaced
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
commands = []
for key, desired in want.items():
interface_name = normalize_interface(key)
if interface_name in have:
extant = have[interface_name]
else:
extant = dict()
intf_commands = set_interface(desired, extant)
intf_commands.extend(clear_interface(desired, extant))
if intf_commands:
commands.append("interface {0}".format(interface_name))
commands.extend(intf_commands)
return commands
@staticmethod
def _state_overridden(want, have):
""" The command generator when state is overridden
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
commands = []
for key, extant in have.items():
if key in want:
desired = want[key]
else:
desired = dict()
if desired.get("ipv4"):
for ipv4 in desired["ipv4"]:
if ipv4["secondary"] is None:
del ipv4["secondary"]
intf_commands = set_interface(desired, extant)
intf_commands.extend(clear_interface(desired, extant))
if intf_commands:
commands.append("interface {0}".format(key))
commands.extend(intf_commands)
return commands
@staticmethod
def _state_merged(want, have):
""" The command generator when state is merged
:rtype: A list
:returns: the commands necessary to merge the provided into
the current configuration
"""
commands = []
for key, desired in want.items():
interface_name = normalize_interface(key)
if interface_name in have:
extant = have[interface_name]
else:
extant = dict()
intf_commands = set_interface(desired, extant)
if intf_commands:
commands.append("interface {0}".format(interface_name))
commands.extend(intf_commands)
return commands
@staticmethod
def _state_deleted(want, have):
""" The command generator when state is deleted
:rtype: A list
:returns: the commands necessary to remove the current configuration
of the provided objects
"""
commands = []
for key in want:
desired = dict()
if key in have:
extant = have[key]
else:
continue
intf_commands = clear_interface(desired, extant)
if intf_commands:
commands.append("interface {0}".format(key))
commands.extend(intf_commands)
return commands
def set_interface(want, have):
commands = []
want_ipv4 = set(tuple(address.items()) for address in want.get("ipv4") or [])
have_ipv4 = set(tuple(address.items()) for address in have.get("ipv4") or [])
for address in want_ipv4 - have_ipv4:
address = dict(address)
if "secondary" in address and not address["secondary"]:
del address["secondary"]
if tuple(address.items()) in have_ipv4:
continue
address_cmd = "ip address {0}".format(address["address"])
if address.get("secondary"):
address_cmd += " secondary"
commands.append(address_cmd)
want_ipv6 = set(tuple(address.items()) for address in want.get("ipv6") or [])
have_ipv6 = set(tuple(address.items()) for address in have.get("ipv6") or [])
for address in want_ipv6 - have_ipv6:
address = dict(address)
commands.append("ipv6 address {0}".format(address["address"]))
return commands
def clear_interface(want, have):
commands = []
want_ipv4 = set(tuple(address.items()) for address in want.get("ipv4") or [])
have_ipv4 = set(tuple(address.items()) for address in have.get("ipv4") or [])
if not want_ipv4:
commands.append("no ip address")
else:
for address in have_ipv4 - want_ipv4:
address = dict(address)
if "secondary" not in address:
address["secondary"] = False
if tuple(address.items()) in want_ipv4:
continue
address_cmd = "no ip address"
if address.get("secondary"):
address_cmd += " {0} secondary".format(address["address"])
commands.append(address_cmd)
if "secondary" not in address:
# Removing the non-secondary address removes all other addresses too
break
want_ipv6 = set(tuple(address.items()) for address in want.get("ipv6") or [])
have_ipv6 = set(tuple(address.items()) for address in have.get("ipv6") or [])
for address in have_ipv6 - want_ipv6:
address = dict(address)
commands.append("no ipv6 address {0}".format(address["address"]))
return commands
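One hypothetical way to restore idempotency here (a sketch only, not necessarily the approach taken in the linked fix) is to drop `None`-valued keys before building the tuple sets, so that `{"address": x, "secondary": None}` compares equal to `{"address": x}`:

```python
def normalize(addresses):
    # Hypothetical helper: ignore None-valued keys and key order when
    # comparing desired addresses against gathered facts.
    return set(
        tuple((k, v) for k, v in sorted(address.items()) if v is not None)
        for address in addresses or []
    )

# Already-configured address now compares equal, so no commands are generated.
assert normalize([{"address": "203.1.1.1/24", "secondary": None}]) == \
    normalize([{"address": "203.1.1.1/24"}])
```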
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,433 |
Template module shows all rows as difference.
|
##### SUMMARY
The template module shows all rows as a difference when `STRING_CONVERSION_ACTION=error`. It seems that the template module loses the `--- before` information.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Template Module
##### ANSIBLE VERSION
```paste below
$ ansible --version
ansible 2.8.2
config file = /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg
configured module search path = ['/Users/mhimuro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /Users/mhimuro/devel/homebrew/bin/ansible
python version = 3.7.4 (default, Jul 12 2019, 09:36:09) [Clang 10.0.1 (clang-1001.0.46.4)]
$
```
##### CONFIGURATION
```paste below
STRING_CONVERSION_ACTION(/Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg) = error
```
##### OS / ENVIRONMENT
target OS versions: CentOS Linux release 7.6.1810 (Core)
##### STEPS TO REPRODUCE
1. prepare dummy.txt on target os.
```command
# echo hello > /root/dummy.txt
```
2. ansible-playbook
```command
$ ansible-playbook -i hosts site.yml --diff --check
PLAY [deploy by template module] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************************
ok: [192.168.0.196]
TASK [template] ****************************************************************************************************************************************************************************************************
--- before
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-6716163hjzxdi/tmp9ta05f8n/dummy.txt
@@ -0,0 +1,2 @@
+hello
+world
changed: [192.168.0.196]
PLAY RECAP *********************************************************************************************************************************************************************************************************
192.168.0.196 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$
```
```yaml
$ cat site.yml
- name: deploy by template module
hosts: all
tasks:
- template:
src: dummy.txt
dest: /root/dummy.txt
```
```text
$ cat dummy.txt
hello
world
```
##### EXPECTED RESULTS
Only the modified line should be shown, and the `before` header should include the path of the existing remote file:
```
TASK [template] ****************************************************************************************************************************************************************************************************
--- before: /root/dummy.txt
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-67303hkyg6c4n/tmpl13oaaoa/dummy.txt
@@ -1 +1,2 @@
hello
+world
changed: [192.168.0.196]
```
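For reference, the expected hunk matches what Python's `difflib` produces when the remote file's current content is used as the `before` side. This is only a minimal sketch to illustrate the expected diff, not Ansible's actual implementation; the file labels are assumptions:

```python
import difflib

# Current content of /root/dummy.txt on the target (the "before" side).
before = "hello\n"
# Rendered template content (the "after" side).
after = "hello\nworld\n"

# Build a unified diff; only the added line should appear as a change.
diff = difflib.unified_diff(
    before.splitlines(keepends=True),
    after.splitlines(keepends=True),
    fromfile="before: /root/dummy.txt",
    tofile="after: dummy.txt (rendered template)",
)
print("".join(diff))
# The hunk header is "@@ -1 +1,2 @@": "hello" is context, "+world" is the addition.
```

The actual output instead shows `@@ -0,0 +1,2 @@` with the whole file added, which suggests the remote file's content was never read as the `before` side in check mode.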
##### ACTUAL RESULTS
```paste below
$ ansible-playbook -i hosts site.yml --diff --check -vvvv
ansible-playbook 2.8.2
config file = /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg
configured module search path = ['/Users/mhimuro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /Users/mhimuro/devel/homebrew/bin/ansible-playbook
python version = 3.7.4 (default, Jul 12 2019, 09:36:09) [Clang 10.0.1 (clang-1001.0.46.4)]
Using /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
script declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
auto declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
Parsed /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: site.yml *************************************************************************************************************************************************************************************************
Positional arguments: site.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
check: True
diff: True
inventory: ('/Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts',)
forks: 5
1 plays in site.yml
PLAY [deploy by template module] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************************
task path: /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/site.yml:1
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.0.196> (0, b'/root\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876 `" && echo ansible-tmp-1563877177.7595708-149449123721876="` echo /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876 `" ) && sleep 0'"'"''
<192.168.0.196> (0, b'ansible-tmp-1563877177.7595708-149449123721876=/root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> Attempting python interpreter discovery
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<192.168.0.196> (0, b'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python2.7\n/usr/libexec/platform-python\n/usr/bin/python\nENDFOUND\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<192.168.0.196> (0, b'{"osrelease_content": "NAME=\\"CentOS Linux\\"\\nVERSION=\\"7 (Core)\\"\\nID=\\"centos\\"\\nID_LIKE=\\"rhel fedora\\"\\nVERSION_ID=\\"7\\"\\nPRETTY_NAME=\\"CentOS Linux 7 (Core)\\"\\nANSI_COLOR=\\"0;31\\"\\nCPE_NAME=\\"cpe:/o:centos:centos:7\\"\\nHOME_URL=\\"https://www.centos.org/\\"\\nBUG_REPORT_URL=\\"https://bugs.centos.org/\\"\\n\\nCENTOS_MANTISBT_PROJECT=\\"CentOS-7\\"\\nCENTOS_MANTISBT_PROJECT_VERSION=\\"7\\"\\nREDHAT_SUPPORT_PRODUCT=\\"centos\\"\\nREDHAT_SUPPORT_PRODUCT_VERSION=\\"7\\"\\n\\n", "platform_dist_result": ["centos", "7.6.1810", "Core"]}\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/system/setup.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7 TO /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7 /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:8 O:131072 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:9 O:163840 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:10 O:196608 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:11 O:229376 S:23099\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 32768 bytes at 98304\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 8 32768 bytes at 131072\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 9 32768 bytes at 163840\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 10 32768 bytes at 196608\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 11 23099 bytes at 229376\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/ /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py && sleep 0'"'"''
<192.168.0.196> (0, b'\r\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_fibre_channel_wwn": [], "module_setup": true, "ansible_distribution_version": "7.6", "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "C", "LC_NUMERIC": "C", "TERM": "screen", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/0", "SHLVL": "2", "SSH_TTY": "/dev/pts/0", "PWD": "/root", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "XDG_SESSION_ID": "27910", "SSH_CLIENT": "192.168.0.116 53653 22", "LOGNAME": "root", "USER": "root", "MAIL": "/var/mail/root", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LS_COLORS": "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au
=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:", "HOME": "/root", "LC_ALL": "C", "_": "/usr/bin/python", "SSH_CONNECTION": "192.168.0.116 53653 192.168.0.196 22"}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_vethcb884b7": {"macaddress": "8a:47:31:d5:d3:1e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", 
"tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "vethcb884b7", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8847:31ff:fed5:d31e"}], "active": true, "speed": 10000}, "ansible_default_ipv4": {"macaddress": "1c:87:2c:41:6e:ce", "network": "192.168.0.0", "mtu": 1500, "broadcast": "192.168.0.255", "alias": "enp4s0", "netmask": "255.255.255.0", "address": "192.168.0.196", "interface": "enp4s0", "type": "ether", "gateway": "192.168.0.3"}, "ansible_swapfree_mb": 7654, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "ja_JP.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-957.1.3.el7.x86_64", "quiet": true, "rhgb": true, "rd.lvm.lv": "centos/swap", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/centos-root"}, "ansible_machine_id": "75e09accf1bb49fa8d70b2de021e00fb", "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "BD9ABFA8-FE30-2EE7-41D2-1C872C416ECE", "ansible_pkg_mgr": "yum", "ansible_vethc03bba1": {"macaddress": "1e:41:f1:a6:31:ff", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", 
"netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "vethc03bba1", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1c41:f1ff:fea6:31ff"}], "active": true, "speed": 10000}, "ansible_iscsi_iqn": "", "ansible_veth2726a80": {"macaddress": "4a:02:87:c6:94:14", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", 
"busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "veth2726a80", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4802:87ff:fec6:9414"}], "active": true, "speed": 10000}, "ansible_all_ipv6_addresses": ["fe80::1e87:2cff:fe41:6ece", "fe80::fcb2:74ff:feb8:ed7a", "fe80::1c41:f1ff:fea6:31ff", "fe80::a0e7:6fff:fea1:a0d4", "fe80::4802:87ff:fec6:9414", "fe80::42:20ff:fe23:46cf", "fe80::8847:31ff:fed5:d31e"], "ansible_uptime_seconds": 6684898, "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_hostnqn": "", "ansible_user_shell": "/bin/bash", "ansible_product_serial": "System Serial Number", "ansible_form_factor": "Desktop", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:20:23:46:cf", "features": {"tx_checksum_ipv4": "off [fixed]", 
"generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [requested on]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [requested on]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": ["veth2726a80", "vethc03bba1", "veth5af9b4f", "veth27f581f", "vethcb884b7"], "id": "8000.0242202346cf", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "172.17.255.255", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::42:20ff:fe23:46cf"}], "active": 
true, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "1", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "2", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "3", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "4", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "5", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "6", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "7", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAAdVLDGWn3TP6UxDh2EOhbblOwKh9nc8rDSSYZ33sc9SQIPhmYsGGnP62cC5Fm4uVe14lBF0Thf8IZCMIYpuLY=", "ansible_user_gid": 0, "ansible_system_vendor": "ASUS", "ansible_swaptotal_mb": 7935, "ansible_distribution_major_version": "7", "ansible_real_group_id": 0, "ansible_lsb": {}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDyDmjz8qJ/JvPuvlZi3qiIT1vBBciJiargJHLs8ccywFNcVbJiXj/3fibFoE2VISKLtcYtPvxAzMnKeowdPc5BmmTmdKyyvSMTxmbX25lhb9t0LhkFeIUXbhy+j9Wvj6/d39Yuh2zUbIqI5YR/qpssEUeh2z/eROm/jN0lj1TSnhcYxDAe04GvXGBfDCCz1lDW/rX1/JgBIdRYGUyB57BbeS3FlvFxz7NfzBEdAdr+Dvv/oxTd4aoteqx1+Z8pNVKYkDw1nbjMFcZDF9u/uANvwh3p0qw4Nfve5Sit/zkDdkdC+DkpnnR5W+M2O1o7Iyq90AafS4xCqzYG6MDR+Jv/", "ansible_user_gecos": "root", "ansible_processor_threads_per_core": 2, "ansible_system": "Linux", "ansible_all_ipv4_addresses": ["192.168.0.196", "172.17.0.1"], "ansible_python_version": "2.7.5", "ansible_product_version": "System Version", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 7690, "used": 7491, "free": 199}, "swap": {"cached": 16, "total": 7935, "free": 7654, "used": 281}, "nocache": {"used": 6807, "free": 883}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_virtualization_role": 
"host", "ansible_dns": {"nameservers": ["192.168.0.5", "192.168.0.16"], "search": ["work"]}, "ansible_effective_group_id": 0, "ansible_enp4s0": {"macaddress": "1c:87:2c:41:6e:ce", "features": {"tx_checksum_ipv4": "off", "generic_receive_offload": "on", "tx_checksum_ipv6": "off", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off", "highdma": "on [fixed]", "rx_fcs": "off", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "off", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "off", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "off", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "off", "rx_checksumming": "on", "tx_tcp_segmentation": "off", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "off [requested on]", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "off", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:04:00.0", "module": "r8169", "mtu": 1500, 
"device": "enp4s0", "promisc": false, "timestamping": ["tx_software", "rx_software", "software"], "ipv4": {"broadcast": "192.168.0.255", "netmask": "255.255.255.0", "network": "192.168.0.0", "address": "192.168.0.196"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1e87:2cff:fe41:6ece"}], "active": true, "speed": 1000, "hw_timestamp_filters": []}, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", 
"rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 7690, "ansible_device_links": {"masters": {"loop1": ["dm-3"], "loop0": ["dm-3"], "sda2": ["dm-0", "dm-1", "dm-2"], "dm-3": ["dm-4", "dm-5", "dm-6", "dm-7", "dm-8"]}, "labels": {}, "ids": {"sr0": ["ata-ASUS_DRW-24F1ST_a_S10M68EG2002AW"], "sda2": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part2", "lvm-pv-uuid-Q78tme-t9AP-o3G0-fAs8-27fr-QioB-m5PGHc", "wwn-0x5000039fe0c30158-part2"], "sda": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS", "wwn-0x5000039fe0c30158"], "dm-8": ["dm-name-docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "sda1": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part1", "wwn-0x5000039fe0c30158-part1"], "dm-6": ["dm-name-docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28"], "dm-7": ["dm-name-docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf"], "dm-4": ["dm-name-docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410"], "dm-5": ["dm-name-docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff"], "dm-2": ["dm-name-centos-home", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSMaaSDIGwxchQRoUyL1ntnMPT6KOAriTU"], "dm-0": ["dm-name-centos-root", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSUq3u89edCoYmtN3lATh7xy5GMZr5Pgo7"], "dm-1": ["dm-name-centos-swap", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnS5bxciorzpAwg9QYL7sMS1PoWUcb0IiXV"]}, "uuids": {"sda1": ["7d6d535a-6728-4c83-8ab2-40cb45b64e7d"], "dm-2": ["80fe1d0c-c3c4-4442-a467-f2975fd87ba5"], 
"dm-0": ["ac012a2a-a7f8-425b-911a-9197e611fbfe"], "dm-8": ["48eda381-df74-4ad8-a63a-46c167bf1144"], "dm-1": ["13900741-5a75-44f6-8848-3325135493d0"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_proc_cmdline": {"LANG": "ja_JP.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-957.1.3.el7.x86_64", "quiet": true, "rhgb": true, "rd.lvm.lv": ["centos/root", "centos/swap"], "crashkernel": "auto", "ro": true, "root": "/dev/mapper/centos-root"}, "ansible_memfree_mb": 199, "ansible_processor_count": 1, "ansible_hostname": "intra", "ansible_interfaces": ["veth2726a80", "vethcb884b7", "docker0", "lo", "enp4s0", "vethc03bba1", "veth5af9b4f", "veth27f581f"], "ansible_selinux": {"status": "disabled"}, "ansible_fqdn": "ec2-52-213-25-113.eu-west-1.compute.amazonaws.com", "ansible_mounts": [{"block_used": 8256, "uuid": "80fe1d0c-c3c4-4442-a467-f2975fd87ba5", "size_total": 437290033152, "block_total": 106760262, "mount": "/home", "block_available": 106752006, "size_available": 437256216576, "fstype": "xfs", "inode_total": 427249664, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/mapper/centos-home", "inode_used": 3, "block_size": 4096, "inode_available": 427249661}, {"block_used": 74602, "uuid": "7d6d535a-6728-4c83-8ab2-40cb45b64e7d", "size_total": 517713920, "block_total": 126395, "mount": "/boot", "block_available": 51793, "size_available": 212144128, "fstype": "xfs", "inode_total": 512000, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/sda1", "inode_used": 361, "block_size": 4096, "inode_available": 511639}, {"block_used": 10291, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "block_available": 2608333, "size_available": 10683731968, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": 
"/dev/mapper/docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "inode_used": 519, "block_size": 4096, "inode_available": 10484217}, {"block_used": 7070823, "uuid": "ac012a2a-a7f8-425b-911a-9197e611fbfe", "size_total": 53660876800, "block_total": 13100800, "mount": "/", "block_available": 6029977, "size_available": 24698785792, "fstype": "xfs", "inode_total": 52428800, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/mapper/centos-root", "inode_used": 146375, "block_size": 4096, "inode_available": 52282425}, {"block_used": 79838, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "block_available": 2538786, "size_available": 10398867456, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "inode_used": 14836, "block_size": 4096, "inode_available": 10469900}, {"block_used": 334234, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013", "block_available": 2284390, "size_available": 9356861440, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013", "inode_used": 79705, "block_size": 4096, "inode_available": 10405031}, {"block_used": 42443, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", 
"block_available": 2576181, "size_available": 10552037376, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", "inode_used": 7156, "block_size": 4096, "inode_available": 10477580}, {"block_used": 322515, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "block_available": 2296109, "size_available": 9404862464, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "inode_used": 79368, "block_size": 4096, "inode_available": 10405368}], "ansible_nodename": "intra.work", "ansible_lvm": {"pvs": {"/dev/sda2": {"free_g": "0.06", "size_g": "465.27", "vg": "centos"}}, "lvs": {"home": {"size_g": "407.46", "vg": "centos"}, "root": {"size_g": "50.00", "vg": "centos"}, "swap": {"size_g": "7.75", "vg": "centos"}}, "vgs": {"centos": {"free_g": "0.06", "size_g": "465.27", "num_lvs": "3", "num_pvs": "1"}}}, "ansible_domain": "eu-west-1.compute.amazonaws.com", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "kvm", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIEzv1iG3Mak/xFq6KbljB8M4YaTfHo/ZiskvcC9Kz7kV", "ansible_processor_cores": 4, "ansible_bios_version": "2201", "ansible_date_time": {"weekday_number": "2", "iso8601_basic_short": "20190723T191938", "tz": "JST", "weeknumber": "29", "hour": "19", "year": "2019", "minute": "19", "tz_offset": "+0900", "month": "07", "epoch": "1563877178", "iso8601_micro": "2019-07-23T10:19:38.711920Z", "weekday": "\\u706b\\u66dc\\u65e5", "time": "19:19:38", "date": 
"2019-07-23", "iso8601": "2019-07-23T10:19:38Z", "day": "23", "iso8601_basic": "20190723T191938711851", "second": "38"}, "ansible_distribution_release": "Core", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_veth27f581f": {"macaddress": "fe:b2:74:b8:ed:7a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "veth27f581f", "promisc": true, 
"timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fcb2:74ff:feb8:ed7a"}], "active": true, "speed": 10000}, "ansible_product_name": "All Series", "ansible_devices": {"dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "209715200", "links": {"masters": ["dm-4", "dm-5", "dm-6", "dm-7", "dm-8"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", "docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "size": "100.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "ASUS", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-ASUS_DRW-24F1ST_a_S10M68EG2002AW"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "DRW-24F1ST a", 
"partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "ATA", "sectors": "976773168", "links": {"masters": [], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS", "wwn-0x5000039fe0c30158"], "uuids": []}, "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2"], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part2", "lvm-pv-uuid-Q78tme-t9AP-o3G0-fAs8-27fr-QioB-m5PGHc", "wwn-0x5000039fe0c30158-part2"], "uuids": []}, "sectors": "975747072", "start": "1026048", "holders": ["centos-root", "centos-swap", "centos-home"], "size": "465.27 GB"}, "sda1": {"sectorsize": 512, "uuid": "7d6d535a-6728-4c83-8ab2-40cb45b64e7d", "links": {"masters": [], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part1", "wwn-0x5000039fe0c30158-part1"], "uuids": ["7d6d535a-6728-4c83-8ab2-40cb45b64e7d"]}, "sectors": "1024000", "start": "2048", "holders": [], "size": "500.00 MB"}}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "TOSHIBA DT01ABA0", "wwn": "0x5000039fe0c30158", "holders": [], "size": "465.76 GB"}, "dm-8": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "uuids": ["48eda381-df74-4ad8-a63a-46c167bf1144"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-6": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": 
["dm-name-docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-7": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "loop1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "4194304", "links": {"masters": ["dm-3"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-pool"], "size": "2.00 GB"}, "loop0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "209715200", "links": {"masters": ["dm-3"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-pool"], "size": "100.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "854499328", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-home", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSMaaSDIGwxchQRoUyL1ntnMPT6KOAriTU"], "uuids": ["80fe1d0c-c3c4-4442-a467-f2975fd87ba5"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", 
"model": null, "partitions": {}, "holders": [], "size": "407.46 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "104857600", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-root", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSUq3u89edCoYmtN3lATh7xy5GMZr5Pgo7"], "uuids": ["ac012a2a-a7f8-425b-911a-9197e611fbfe"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "50.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "16252928", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-swap", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnS5bxciorzpAwg9QYL7sMS1PoWUcb0IiXV"], "uuids": ["13900741-5a75-44f6-8848-3325135493d0"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "7.75 GB"}}, "ansible_user_uid": 0, "ansible_bios_date": "11/26/2014", "ansible_distribution": "CentOS", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", 
"cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_veth5af9b4f": {"macaddress": "a2:e7:6f:a1:a0:d4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 
1500, "device": "veth5af9b4f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a0e7:6fff:fea1:a0d4"}], "active": true, "speed": 10000}}}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 192.168.0.196 closed.\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
ok: [192.168.0.196]
META: ran handlers
TASK [template] ****************************************************************************************************************************************************************************************************
task path: /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/site.yml:4
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.0.196> (0, b'/root\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802 `" && echo ansible-tmp-1563877178.840123-58444523661802="` echo /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802 `" ) && sleep 0'"'"''
<192.168.0.196> (0, b'ansible-tmp-1563877178.840123-58444523661802=/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/files/stat.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329 TO /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329 /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:8068\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 8068 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py && sleep 0'"'"''
<192.168.0.196> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "path": "/root/dummy.txt", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "us-ascii", "uid": 0, "exists": true, "attr_flags": "", "woth": false, "isreg": true, "device_type": 0, "mtime": 1563871385.1164613, "block_size": 4096, "inode": 201591273, "isgid": false, "size": 6, "executable": false, "isuid": false, "readable": true, "version": "889946615", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "text/plain", "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/root/dummy.txt", "xusr": false, "atime": 1563871388.064355, "isdir": false, "ctime": 1563871385.1404603, "isblk": false, "wgrp": false, "checksum": "9591818c07e900db7e1e0bc4b884c945e6a61b24", "dev": 64768, "roth": true, "isfifo": false, "mode": "0644", "xgrp": false, "rusr": true, "attributes": []}, "changed": false}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from 
master 0\r\nShared connection to 192.168.0.196 closed.\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/files/file.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g TO /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:12803\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 12803 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py && sleep 0'"'"''
<192.168.0.196> (1, b'\r\n{"msg": "argument _diff_peek is of type <type \'bool\'> and we were unable to convert to str: Quote the entire value to ensure it does not change.", "failed": true, "exception": "WARNING: The below traceback may *not* be related to the actual failure.\\n File \\"/tmp/ansible_file_payload_eqV_MW/ansible_file_payload.zip/ansible/module_utils/basic.py\\", line 1780, in _check_argument_types\\n param[k] = type_checker(value)\\n File \\"/tmp/ansible_file_payload_eqV_MW/ansible_file_payload.zip/ansible/module_utils/basic.py\\", line 1631, in _check_type_str\\n raise TypeError(to_native(msg))\\n", "invocation": {"module_args": {"force": false, "recurse": false, "access_time_format": "%Y%m%d%H%M.%S", "_diff_peek": true, "modification_time_format": "%Y%m%d%H%M.%S", "path": "/root/dummy.txt", "follow": true}}}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.0.196 closed.\r\n')
<192.168.0.196> Failed to connect to the host via ssh: OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /Users/mhimuro/.ssh/config
debug1: /Users/mhimuro/.ssh/config line 25: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug1: /etc/ssh/ssh_config line 52: Applying options for *
debug2: resolve_canonicalize: hostname 192.168.0.196 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 66322
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 192.168.0.196 closed.
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
--- before
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt
@@ -0,0 +1,2 @@
+hello
+world
changed: [192.168.0.196] => {
"changed": true,
"diff": [
{
"after": "hello\nworld\n",
"after_header": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt",
"before": ""
}
],
"invocation": {
"dest": "/root/dummy.txt",
"follow": false,
"mode": null,
"module_args": {
"dest": "/root/dummy.txt",
"follow": false,
"mode": null,
"src": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt"
},
"src": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt"
}
}
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************************************************************************************************************************************************
192.168.0.196 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$
```
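A note on the failure buried in the `-vvvv` log above: the `file` module is invoked with `_diff_peek: true`, and `_check_argument_types` tries to coerce that boolean to `str`. With `STRING_CONVERSION_ACTION=error` the coercion raises instead of warning, so the pre-copy stat that would populate the `before` side of the diff never completes. A minimal sketch of that behavior (a simplified model, not Ansible's actual `_check_type_str`):

```python
def check_type_str(name, value, conversion_action):
    """Simplified model of ansible.module_utils.basic._check_type_str."""
    if isinstance(value, str):
        return value
    msg = ("argument %s is of type %s and we were unable to convert to str: "
           "Quote the entire value to ensure it does not change."
           % (name, type(value)))
    if conversion_action == 'error':
        raise TypeError(msg)
    return str(value)  # 'warn' / 'ignore' policies silently coerce

# Default policy: the bool is silently converted and the diff peek works.
assert check_type_str('_diff_peek', True, 'warn') == 'True'

# With STRING_CONVERSION_ACTION=error the same call fails, the module
# returns "failed": true, and the diff falls back to an empty "before".
try:
    check_type_str('_diff_peek', True, 'error')
except TypeError as exc:
    print(exc)
```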
|
https://github.com/ansible/ansible/issues/59433
|
https://github.com/ansible/ansible/pull/60428
|
9a51dff0b17f01bcb280a438ecfe785e5fda4541
|
9b7198d25ecf084b6a465ba445efd426022265c3
| 2019-07-23T10:39:00Z |
python
| 2020-01-17T21:02:28Z |
changelogs/fragments/file-fix-diff-peek-arg-spec.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,433 |
Template module shows all rows as difference.
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The template module shows all rows as a difference when `STRING_CONVERSION_ACTION=error`. It seems that the template module loses the `--- before` information.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Template Module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.8.2
config file = /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg
configured module search path = ['/Users/mhimuro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /Users/mhimuro/devel/homebrew/bin/ansible
python version = 3.7.4 (default, Jul 12 2019, 09:36:09) [Clang 10.0.1 (clang-1001.0.46.4)]
$
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
STRING_CONVERSION_ACTION(/Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg) = error
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
target OS versions: CentOS Linux release 7.6.1810 (Core)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Prepare dummy.txt on the target OS:
```command
# echo hello > /root/dummy.txt
```
2. Run `ansible-playbook` with `--diff --check`:
<!--- Paste example playbooks or commands between quotes below -->
```command
$ ansible-playbook -i hosts site.yml --diff --check
PLAY [deploy by template module] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************************
ok: [192.168.0.196]
TASK [template] ****************************************************************************************************************************************************************************************************
--- before
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-6716163hjzxdi/tmp9ta05f8n/dummy.txt
@@ -0,0 +1,2 @@
+hello
+world
changed: [192.168.0.196]
PLAY RECAP *********************************************************************************************************************************************************************************************************
192.168.0.196 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$
```
```yaml
$ cat site.yml
- name: deploy by template module
hosts: all
tasks:
- template:
src: dummy.txt
dest: /root/dummy.txt
```
```dummy.txt
$ cat dummy.txt
hello
world
```
<!--- HINT: You can paste gist.github.com links for larger files -->
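The root cause (suggested by the fix's changelog fragment, `file-fix-diff-peek-arg-spec.yaml`): the `file` module's internal `_diff_peek` argument had no declared type, so `AnsibleModule` coerced the boolean to the default `str` type, which is fatal under `STRING_CONVERSION_ACTION=error`. A hedged sketch of the difference a `type: bool` declaration makes — `validate` below is a toy checker for illustration, not the real argument-spec machinery:

```python
def validate(argument_spec, params, strict_str_conversion=True):
    """Toy argument-spec checker: coerce each param to its declared type."""
    out = {}
    for name, value in params.items():
        wanted = argument_spec.get(name, {}).get('type', 'str')  # str is the default
        if wanted == 'bool':
            out[name] = bool(value)  # no string conversion needed
        elif wanted == 'str':
            if not isinstance(value, str) and strict_str_conversion:
                raise TypeError('argument %s is of type %s and we were unable '
                                'to convert to str' % (name, type(value)))
            out[name] = str(value)
    return out

params = {'_diff_peek': True}

# Old spec: no type declared, so the bool hits the str coercion and raises.
try:
    validate({'_diff_peek': {}}, params)
except TypeError as exc:
    print('old spec fails:', exc)

# Fixed spec: declaring the type keeps the value a bool and diff peek works.
print(validate({'_diff_peek': {'type': 'bool'}}, params))  # {'_diff_peek': True}
```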
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Only the modified row should be shown:
```
TASK [template] ****************************************************************************************************************************************************************************************************
--- before: /root/dummy.txt
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-67303hkyg6c4n/tmpl13oaaoa/dummy.txt
@@ -1 +1,2 @@
hello
+world
changed: [192.168.0.196]
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook -i hosts site.yml --diff --check -vvvv
ansible-playbook 2.8.2
config file = /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg
configured module search path = ['/Users/mhimuro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /Users/mhimuro/devel/homebrew/bin/ansible-playbook
python version = 3.7.4 (default, Jul 12 2019, 09:36:09) [Clang 10.0.1 (clang-1001.0.46.4)]
Using /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
script declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
auto declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
Parsed /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: site.yml *************************************************************************************************************************************************************************************************
Positional arguments: site.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
check: True
diff: True
inventory: ('/Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts',)
forks: 5
1 plays in site.yml
PLAY [deploy by template module] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************************
task path: /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/site.yml:1
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.0.196> (0, b'/root\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876 `" && echo ansible-tmp-1563877177.7595708-149449123721876="` echo /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876 `" ) && sleep 0'"'"''
<192.168.0.196> (0, b'ansible-tmp-1563877177.7595708-149449123721876=/root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> Attempting python interpreter discovery
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<192.168.0.196> (0, b'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python2.7\n/usr/libexec/platform-python\n/usr/bin/python\nENDFOUND\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<192.168.0.196> (0, b'{"osrelease_content": "NAME=\\"CentOS Linux\\"\\nVERSION=\\"7 (Core)\\"\\nID=\\"centos\\"\\nID_LIKE=\\"rhel fedora\\"\\nVERSION_ID=\\"7\\"\\nPRETTY_NAME=\\"CentOS Linux 7 (Core)\\"\\nANSI_COLOR=\\"0;31\\"\\nCPE_NAME=\\"cpe:/o:centos:centos:7\\"\\nHOME_URL=\\"https://www.centos.org/\\"\\nBUG_REPORT_URL=\\"https://bugs.centos.org/\\"\\n\\nCENTOS_MANTISBT_PROJECT=\\"CentOS-7\\"\\nCENTOS_MANTISBT_PROJECT_VERSION=\\"7\\"\\nREDHAT_SUPPORT_PRODUCT=\\"centos\\"\\nREDHAT_SUPPORT_PRODUCT_VERSION=\\"7\\"\\n\\n", "platform_dist_result": ["centos", "7.6.1810", "Core"]}\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/system/setup.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7 TO /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7 /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:8 O:131072 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:9 O:163840 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:10 O:196608 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:11 O:229376 S:23099\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 32768 bytes at 98304\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 8 32768 bytes at 131072\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 9 32768 bytes at 163840\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 10 32768 bytes at 196608\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 11 23099 bytes at 229376\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/ /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py && sleep 0'"'"''
<192.168.0.196> (0, b'\r\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_fibre_channel_wwn": [], "module_setup": true, "ansible_distribution_version": "7.6", "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "C", "LC_NUMERIC": "C", "TERM": "screen", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/0", "SHLVL": "2", "SSH_TTY": "/dev/pts/0", "PWD": "/root", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "XDG_SESSION_ID": "27910", "SSH_CLIENT": "192.168.0.116 53653 22", "LOGNAME": "root", "USER": "root", "MAIL": "/var/mail/root", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LS_COLORS": "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au
=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:", "HOME": "/root", "LC_ALL": "C", "_": "/usr/bin/python", "SSH_CONNECTION": "192.168.0.116 53653 192.168.0.196 22"}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_vethcb884b7": {"macaddress": "8a:47:31:d5:d3:1e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", 
"tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "vethcb884b7", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8847:31ff:fed5:d31e"}], "active": true, "speed": 10000}, "ansible_default_ipv4": {"macaddress": "1c:87:2c:41:6e:ce", "network": "192.168.0.0", "mtu": 1500, "broadcast": "192.168.0.255", "alias": "enp4s0", "netmask": "255.255.255.0", "address": "192.168.0.196", "interface": "enp4s0", "type": "ether", "gateway": "192.168.0.3"}, "ansible_swapfree_mb": 7654, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "ja_JP.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-957.1.3.el7.x86_64", "quiet": true, "rhgb": true, "rd.lvm.lv": "centos/swap", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/centos-root"}, "ansible_machine_id": "75e09accf1bb49fa8d70b2de021e00fb", "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "BD9ABFA8-FE30-2EE7-41D2-1C872C416ECE", "ansible_pkg_mgr": "yum", "ansible_vethc03bba1": {"macaddress": "1e:41:f1:a6:31:ff", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", 
"netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "vethc03bba1", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1c41:f1ff:fea6:31ff"}], "active": true, "speed": 10000}, "ansible_iscsi_iqn": "", "ansible_veth2726a80": {"macaddress": "4a:02:87:c6:94:14", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", 
"busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "veth2726a80", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4802:87ff:fec6:9414"}], "active": true, "speed": 10000}, "ansible_all_ipv6_addresses": ["fe80::1e87:2cff:fe41:6ece", "fe80::fcb2:74ff:feb8:ed7a", "fe80::1c41:f1ff:fea6:31ff", "fe80::a0e7:6fff:fea1:a0d4", "fe80::4802:87ff:fec6:9414", "fe80::42:20ff:fe23:46cf", "fe80::8847:31ff:fed5:d31e"], "ansible_uptime_seconds": 6684898, "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_hostnqn": "", "ansible_user_shell": "/bin/bash", "ansible_product_serial": "System Serial Number", "ansible_form_factor": "Desktop", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:20:23:46:cf", "features": {"tx_checksum_ipv4": "off [fixed]", 
"generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [requested on]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [requested on]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": ["veth2726a80", "vethc03bba1", "veth5af9b4f", "veth27f581f", "vethcb884b7"], "id": "8000.0242202346cf", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "172.17.255.255", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::42:20ff:fe23:46cf"}], "active": 
true, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "1", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "2", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "3", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "4", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "5", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "6", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "7", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAAdVLDGWn3TP6UxDh2EOhbblOwKh9nc8rDSSYZ33sc9SQIPhmYsGGnP62cC5Fm4uVe14lBF0Thf8IZCMIYpuLY=", "ansible_user_gid": 0, "ansible_system_vendor": "ASUS", "ansible_swaptotal_mb": 7935, "ansible_distribution_major_version": "7", "ansible_real_group_id": 0, "ansible_lsb": {}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDyDmjz8qJ/JvPuvlZi3qiIT1vBBciJiargJHLs8ccywFNcVbJiXj/3fibFoE2VISKLtcYtPvxAzMnKeowdPc5BmmTmdKyyvSMTxmbX25lhb9t0LhkFeIUXbhy+j9Wvj6/d39Yuh2zUbIqI5YR/qpssEUeh2z/eROm/jN0lj1TSnhcYxDAe04GvXGBfDCCz1lDW/rX1/JgBIdRYGUyB57BbeS3FlvFxz7NfzBEdAdr+Dvv/oxTd4aoteqx1+Z8pNVKYkDw1nbjMFcZDF9u/uANvwh3p0qw4Nfve5Sit/zkDdkdC+DkpnnR5W+M2O1o7Iyq90AafS4xCqzYG6MDR+Jv/", "ansible_user_gecos": "root", "ansible_processor_threads_per_core": 2, "ansible_system": "Linux", "ansible_all_ipv4_addresses": ["192.168.0.196", "172.17.0.1"], "ansible_python_version": "2.7.5", "ansible_product_version": "System Version", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 7690, "used": 7491, "free": 199}, "swap": {"cached": 16, "total": 7935, "free": 7654, "used": 281}, "nocache": {"used": 6807, "free": 883}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_virtualization_role": 
"host", "ansible_dns": {"nameservers": ["192.168.0.5", "192.168.0.16"], "search": ["work"]}, "ansible_effective_group_id": 0, "ansible_enp4s0": {"macaddress": "1c:87:2c:41:6e:ce", "features": {"tx_checksum_ipv4": "off", "generic_receive_offload": "on", "tx_checksum_ipv6": "off", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off", "highdma": "on [fixed]", "rx_fcs": "off", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "off", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "off", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "off", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "off", "rx_checksumming": "on", "tx_tcp_segmentation": "off", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "off [requested on]", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "off", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:04:00.0", "module": "r8169", "mtu": 1500, 
"device": "enp4s0", "promisc": false, "timestamping": ["tx_software", "rx_software", "software"], "ipv4": {"broadcast": "192.168.0.255", "netmask": "255.255.255.0", "network": "192.168.0.0", "address": "192.168.0.196"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1e87:2cff:fe41:6ece"}], "active": true, "speed": 1000, "hw_timestamp_filters": []}, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", 
"rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 7690, "ansible_device_links": {"masters": {"loop1": ["dm-3"], "loop0": ["dm-3"], "sda2": ["dm-0", "dm-1", "dm-2"], "dm-3": ["dm-4", "dm-5", "dm-6", "dm-7", "dm-8"]}, "labels": {}, "ids": {"sr0": ["ata-ASUS_DRW-24F1ST_a_S10M68EG2002AW"], "sda2": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part2", "lvm-pv-uuid-Q78tme-t9AP-o3G0-fAs8-27fr-QioB-m5PGHc", "wwn-0x5000039fe0c30158-part2"], "sda": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS", "wwn-0x5000039fe0c30158"], "dm-8": ["dm-name-docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "sda1": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part1", "wwn-0x5000039fe0c30158-part1"], "dm-6": ["dm-name-docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28"], "dm-7": ["dm-name-docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf"], "dm-4": ["dm-name-docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410"], "dm-5": ["dm-name-docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff"], "dm-2": ["dm-name-centos-home", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSMaaSDIGwxchQRoUyL1ntnMPT6KOAriTU"], "dm-0": ["dm-name-centos-root", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSUq3u89edCoYmtN3lATh7xy5GMZr5Pgo7"], "dm-1": ["dm-name-centos-swap", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnS5bxciorzpAwg9QYL7sMS1PoWUcb0IiXV"]}, "uuids": {"sda1": ["7d6d535a-6728-4c83-8ab2-40cb45b64e7d"], "dm-2": ["80fe1d0c-c3c4-4442-a467-f2975fd87ba5"], 
"dm-0": ["ac012a2a-a7f8-425b-911a-9197e611fbfe"], "dm-8": ["48eda381-df74-4ad8-a63a-46c167bf1144"], "dm-1": ["13900741-5a75-44f6-8848-3325135493d0"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_proc_cmdline": {"LANG": "ja_JP.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-957.1.3.el7.x86_64", "quiet": true, "rhgb": true, "rd.lvm.lv": ["centos/root", "centos/swap"], "crashkernel": "auto", "ro": true, "root": "/dev/mapper/centos-root"}, "ansible_memfree_mb": 199, "ansible_processor_count": 1, "ansible_hostname": "intra", "ansible_interfaces": ["veth2726a80", "vethcb884b7", "docker0", "lo", "enp4s0", "vethc03bba1", "veth5af9b4f", "veth27f581f"], "ansible_selinux": {"status": "disabled"}, "ansible_fqdn": "ec2-52-213-25-113.eu-west-1.compute.amazonaws.com", "ansible_mounts": [{"block_used": 8256, "uuid": "80fe1d0c-c3c4-4442-a467-f2975fd87ba5", "size_total": 437290033152, "block_total": 106760262, "mount": "/home", "block_available": 106752006, "size_available": 437256216576, "fstype": "xfs", "inode_total": 427249664, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/mapper/centos-home", "inode_used": 3, "block_size": 4096, "inode_available": 427249661}, {"block_used": 74602, "uuid": "7d6d535a-6728-4c83-8ab2-40cb45b64e7d", "size_total": 517713920, "block_total": 126395, "mount": "/boot", "block_available": 51793, "size_available": 212144128, "fstype": "xfs", "inode_total": 512000, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/sda1", "inode_used": 361, "block_size": 4096, "inode_available": 511639}, {"block_used": 10291, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "block_available": 2608333, "size_available": 10683731968, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": 
"/dev/mapper/docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "inode_used": 519, "block_size": 4096, "inode_available": 10484217}, {"block_used": 7070823, "uuid": "ac012a2a-a7f8-425b-911a-9197e611fbfe", "size_total": 53660876800, "block_total": 13100800, "mount": "/", "block_available": 6029977, "size_available": 24698785792, "fstype": "xfs", "inode_total": 52428800, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/mapper/centos-root", "inode_used": 146375, "block_size": 4096, "inode_available": 52282425}, {"block_used": 79838, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "block_available": 2538786, "size_available": 10398867456, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "inode_used": 14836, "block_size": 4096, "inode_available": 10469900}, {"block_used": 334234, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013", "block_available": 2284390, "size_available": 9356861440, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013", "inode_used": 79705, "block_size": 4096, "inode_available": 10405031}, {"block_used": 42443, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", 
"block_available": 2576181, "size_available": 10552037376, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", "inode_used": 7156, "block_size": 4096, "inode_available": 10477580}, {"block_used": 322515, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "block_available": 2296109, "size_available": 9404862464, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "inode_used": 79368, "block_size": 4096, "inode_available": 10405368}], "ansible_nodename": "intra.work", "ansible_lvm": {"pvs": {"/dev/sda2": {"free_g": "0.06", "size_g": "465.27", "vg": "centos"}}, "lvs": {"home": {"size_g": "407.46", "vg": "centos"}, "root": {"size_g": "50.00", "vg": "centos"}, "swap": {"size_g": "7.75", "vg": "centos"}}, "vgs": {"centos": {"free_g": "0.06", "size_g": "465.27", "num_lvs": "3", "num_pvs": "1"}}}, "ansible_domain": "eu-west-1.compute.amazonaws.com", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "kvm", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIEzv1iG3Mak/xFq6KbljB8M4YaTfHo/ZiskvcC9Kz7kV", "ansible_processor_cores": 4, "ansible_bios_version": "2201", "ansible_date_time": {"weekday_number": "2", "iso8601_basic_short": "20190723T191938", "tz": "JST", "weeknumber": "29", "hour": "19", "year": "2019", "minute": "19", "tz_offset": "+0900", "month": "07", "epoch": "1563877178", "iso8601_micro": "2019-07-23T10:19:38.711920Z", "weekday": "\\u706b\\u66dc\\u65e5", "time": "19:19:38", "date": 
"2019-07-23", "iso8601": "2019-07-23T10:19:38Z", "day": "23", "iso8601_basic": "20190723T191938711851", "second": "38"}, "ansible_distribution_release": "Core", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_veth27f581f": {"macaddress": "fe:b2:74:b8:ed:7a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "veth27f581f", "promisc": true, 
"timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fcb2:74ff:feb8:ed7a"}], "active": true, "speed": 10000}, "ansible_product_name": "All Series", "ansible_devices": {"dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "209715200", "links": {"masters": ["dm-4", "dm-5", "dm-6", "dm-7", "dm-8"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", "docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "size": "100.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "ASUS", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-ASUS_DRW-24F1ST_a_S10M68EG2002AW"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "DRW-24F1ST a", 
"partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "ATA", "sectors": "976773168", "links": {"masters": [], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS", "wwn-0x5000039fe0c30158"], "uuids": []}, "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2"], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part2", "lvm-pv-uuid-Q78tme-t9AP-o3G0-fAs8-27fr-QioB-m5PGHc", "wwn-0x5000039fe0c30158-part2"], "uuids": []}, "sectors": "975747072", "start": "1026048", "holders": ["centos-root", "centos-swap", "centos-home"], "size": "465.27 GB"}, "sda1": {"sectorsize": 512, "uuid": "7d6d535a-6728-4c83-8ab2-40cb45b64e7d", "links": {"masters": [], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part1", "wwn-0x5000039fe0c30158-part1"], "uuids": ["7d6d535a-6728-4c83-8ab2-40cb45b64e7d"]}, "sectors": "1024000", "start": "2048", "holders": [], "size": "500.00 MB"}}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "TOSHIBA DT01ABA0", "wwn": "0x5000039fe0c30158", "holders": [], "size": "465.76 GB"}, "dm-8": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "uuids": ["48eda381-df74-4ad8-a63a-46c167bf1144"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-6": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": 
["dm-name-docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-7": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "loop1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "4194304", "links": {"masters": ["dm-3"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-pool"], "size": "2.00 GB"}, "loop0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "209715200", "links": {"masters": ["dm-3"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-pool"], "size": "100.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "854499328", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-home", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSMaaSDIGwxchQRoUyL1ntnMPT6KOAriTU"], "uuids": ["80fe1d0c-c3c4-4442-a467-f2975fd87ba5"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", 
"model": null, "partitions": {}, "holders": [], "size": "407.46 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "104857600", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-root", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSUq3u89edCoYmtN3lATh7xy5GMZr5Pgo7"], "uuids": ["ac012a2a-a7f8-425b-911a-9197e611fbfe"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "50.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "16252928", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-swap", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnS5bxciorzpAwg9QYL7sMS1PoWUcb0IiXV"], "uuids": ["13900741-5a75-44f6-8848-3325135493d0"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "7.75 GB"}}, "ansible_user_uid": 0, "ansible_bios_date": "11/26/2014", "ansible_distribution": "CentOS", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", 
"cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_veth5af9b4f": {"macaddress": "a2:e7:6f:a1:a0:d4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 
1500, "device": "veth5af9b4f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a0e7:6fff:fea1:a0d4"}], "active": true, "speed": 10000}}}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 192.168.0.196 closed.\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
ok: [192.168.0.196]
META: ran handlers
TASK [template] ****************************************************************************************************************************************************************************************************
task path: /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/site.yml:4
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.0.196> (0, b'/root\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802 `" && echo ansible-tmp-1563877178.840123-58444523661802="` echo /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802 `" ) && sleep 0'"'"''
<192.168.0.196> (0, b'ansible-tmp-1563877178.840123-58444523661802=/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/files/stat.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329 TO /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329 /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:8068\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 8068 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py && sleep 0'"'"''
<192.168.0.196> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "path": "/root/dummy.txt", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "us-ascii", "uid": 0, "exists": true, "attr_flags": "", "woth": false, "isreg": true, "device_type": 0, "mtime": 1563871385.1164613, "block_size": 4096, "inode": 201591273, "isgid": false, "size": 6, "executable": false, "isuid": false, "readable": true, "version": "889946615", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "text/plain", "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/root/dummy.txt", "xusr": false, "atime": 1563871388.064355, "isdir": false, "ctime": 1563871385.1404603, "isblk": false, "wgrp": false, "checksum": "9591818c07e900db7e1e0bc4b884c945e6a61b24", "dev": 64768, "roth": true, "isfifo": false, "mode": "0644", "xgrp": false, "rusr": true, "attributes": []}, "changed": false}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from 
master 0\r\nShared connection to 192.168.0.196 closed.\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/files/file.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g TO /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:12803\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 12803 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py && sleep 0'"'"''
<192.168.0.196> (1, b'\r\n{"msg": "argument _diff_peek is of type <type \'bool\'> and we were unable to convert to str: Quote the entire value to ensure it does not change.", "failed": true, "exception": "WARNING: The below traceback may *not* be related to the actual failure.\\n File \\"/tmp/ansible_file_payload_eqV_MW/ansible_file_payload.zip/ansible/module_utils/basic.py\\", line 1780, in _check_argument_types\\n param[k] = type_checker(value)\\n File \\"/tmp/ansible_file_payload_eqV_MW/ansible_file_payload.zip/ansible/module_utils/basic.py\\", line 1631, in _check_type_str\\n raise TypeError(to_native(msg))\\n", "invocation": {"module_args": {"force": false, "recurse": false, "access_time_format": "%Y%m%d%H%M.%S", "_diff_peek": true, "modification_time_format": "%Y%m%d%H%M.%S", "path": "/root/dummy.txt", "follow": true}}}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.0.196 closed.\r\n')
<192.168.0.196> Failed to connect to the host via ssh: OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /Users/mhimuro/.ssh/config
debug1: /Users/mhimuro/.ssh/config line 25: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug1: /etc/ssh/ssh_config line 52: Applying options for *
debug2: resolve_canonicalize: hostname 192.168.0.196 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 66322
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 192.168.0.196 closed.
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
--- before
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt
@@ -0,0 +1,2 @@
+hello
+world
changed: [192.168.0.196] => {
"changed": true,
"diff": [
{
"after": "hello\nworld\n",
"after_header": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt",
"before": ""
}
],
"invocation": {
"dest": "/root/dummy.txt",
"follow": false,
"mode": null,
"module_args": {
"dest": "/root/dummy.txt",
"follow": false,
"mode": null,
"src": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt"
},
"src": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt"
}
}
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************************************************************************************************************************************************
192.168.0.196 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$
```
|
https://github.com/ansible/ansible/issues/59433
|
https://github.com/ansible/ansible/pull/60428
|
9a51dff0b17f01bcb280a438ecfe785e5fda4541
|
9b7198d25ecf084b6a465ba445efd426022265c3
| 2019-07-23T10:39:00Z |
python
| 2020-01-17T21:02:28Z |
lib/ansible/modules/files/file.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: file
version_added: historical
short_description: Manage files and file properties
extends_documentation_fragment: files
description:
- Set attributes of files, symlinks or directories.
- Alternatively, remove files, symlinks or directories.
- Many other modules support the same options as the C(file) module - including M(copy), M(template), and M(assemble).
- For Windows targets, use the M(win_file) module instead.
options:
path:
description:
- Path to the file being managed.
type: path
required: yes
aliases: [ dest, name ]
state:
description:
- If C(absent), directories will be recursively deleted, and files or symlinks will
be unlinked. In the case of a directory, if C(diff) is declared, you will see the files and folders deleted listed
under C(path_contents). Note that C(absent) will not cause C(file) to fail if the C(path) does
not exist as the state did not change.
- If C(directory), all intermediate subdirectories will be created if they
do not exist. Since Ansible 1.7 they will be created with the supplied permissions.
- If C(file), without any other options this works mostly as a 'stat' and will return the current state of C(path).
Even with other options (e.g. C(mode)), the file will be modified but will NOT be created if it does not exist;
see the C(touch) value or the M(copy) or M(template) module if you want that behavior.
- If C(hard), the hard link will be created or changed.
- If C(link), the symbolic link will be created or changed.
- If C(touch) (new in 1.4), an empty file will be created if the C(path) does not
exist, while an existing file or directory will receive updated file access and
modification times (similar to the way C(touch) works from the command line).
type: str
default: file
choices: [ absent, directory, file, hard, link, touch ]
src:
description:
- Path of the file to link to.
- This applies only to C(state=link) and C(state=hard).
- For C(state=link), this will also accept a non-existing path.
- Relative paths are relative to the file being created (C(path)) which is how
the Unix command C(ln -s SRC DEST) treats relative paths.
type: path
recurse:
description:
- Recursively set the specified file attributes on directory contents.
- This applies only when C(state) is set to C(directory).
type: bool
default: no
version_added: '1.1'
force:
description:
- >
Force the creation of the symlinks in two cases: the source file does
not exist (but will appear later); the destination exists and is a file (so, we need to unlink the
C(path) file and create symlink to the C(src) file in place of it).
type: bool
default: no
follow:
description:
- This flag indicates that filesystem links, if they exist, should be followed.
- Prior to Ansible 2.5, this was C(no) by default.
type: bool
default: yes
version_added: '1.8'
modification_time:
description:
- This parameter indicates the time the file's modification time should be set to.
- Should be C(preserve) when no modification is required, C(YYYYMMDDHHMM.SS) when using default time format, or C(now).
- Default is C(None), meaning that C(preserve) is the default for C(state=[file,directory,link,hard]) and C(now) is the default for C(state=touch).
type: str
version_added: "2.7"
modification_time_format:
description:
- When used with C(modification_time), indicates the time format that must be used.
- Based on default Python format (see time.strftime doc).
type: str
default: "%Y%m%d%H%M.%S"
version_added: '2.7'
access_time:
description:
- This parameter indicates the time the file's access time should be set to.
- Should be C(preserve) when no modification is required, C(YYYYMMDDHHMM.SS) when using default time format, or C(now).
- Default is C(None), meaning that C(preserve) is the default for C(state=[file,directory,link,hard]) and C(now) is the default for C(state=touch).
type: str
version_added: '2.7'
access_time_format:
description:
- When used with C(access_time), indicates the time format that must be used.
- Based on default Python format (see time.strftime doc).
type: str
default: "%Y%m%d%H%M.%S"
version_added: '2.7'
seealso:
- module: assemble
- module: copy
- module: stat
- module: template
- module: win_file
author:
- Ansible Core Team
- Michael DeHaan
'''
EXAMPLES = r'''
- name: Change file ownership, group and permissions
file:
path: /etc/foo.conf
owner: foo
group: foo
mode: '0644'
- name: Give insecure permissions to an existing file
file:
path: /work
owner: root
group: root
mode: '1777'
- name: Create a symbolic link
file:
src: /file/to/link/to
dest: /path/to/symlink
owner: foo
group: foo
state: link
- name: Create two hard links
file:
src: '/tmp/{{ item.src }}'
dest: '{{ item.dest }}'
state: hard
loop:
- { src: x, dest: y }
- { src: z, dest: k }
- name: Touch a file, using symbolic modes to set the permissions (equivalent to 0644)
file:
path: /etc/foo.conf
state: touch
mode: u=rw,g=r,o=r
- name: Touch the same file, but add/remove some permissions
file:
path: /etc/foo.conf
state: touch
mode: u+rw,g-wx,o-rwx
- name: Touch again the same file, but do not change times (this makes the task idempotent)
file:
path: /etc/foo.conf
state: touch
mode: u+rw,g-wx,o-rwx
modification_time: preserve
access_time: preserve
- name: Create a directory if it does not exist
file:
path: /etc/some_directory
state: directory
mode: '0755'
- name: Update modification and access time of given file
file:
path: /etc/some_file
state: file
modification_time: now
access_time: now
- name: Set access time based on seconds from epoch value
file:
path: /etc/another_file
state: file
access_time: '{{ "%Y%m%d%H%M.%S" | strftime(stat_var.stat.atime) }}'
- name: Recursively change ownership of a directory
file:
path: /etc/foo
state: directory
recurse: yes
owner: foo
group: foo
- name: Remove file (delete file)
file:
path: /etc/foo.txt
state: absent
- name: Recursively remove directory
file:
path: /etc/foo
state: absent
'''
RETURN = r'''
'''
import errno
import os
import shutil
import sys
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes, to_native
# There will only be a single AnsibleModule object per module
module = None
class AnsibleModuleError(Exception):
def __init__(self, results):
self.results = results
def __repr__(self):
return 'AnsibleModuleError(results={0})'.format(self.results)
class ParameterError(AnsibleModuleError):
pass
class Sentinel(object):
def __new__(cls, *args, **kwargs):
return cls
def _ansible_excepthook(exc_type, exc_value, tb):
# Using an exception allows us to catch it if the calling code knows it can recover
if issubclass(exc_type, AnsibleModuleError):
module.fail_json(**exc_value.results)
else:
sys.__excepthook__(exc_type, exc_value, tb)
def additional_parameter_handling(params):
"""Additional parameter validation and reformatting"""
# When path is a directory, rewrite the pathname to be the file inside of the directory
# TODO: Why do we exclude link? Why don't we exclude directory? Should we exclude touch?
# I think this is where we want to be in the future:
# when isdir(path):
# if state == absent: Remove the directory
# if state == touch: Touch the directory
# if state == directory: Assert the directory is the same as the one specified
# if state == file: place inside of the directory (use _original_basename)
# if state == link: place inside of the directory (use _original_basename. Fallback to src?)
# if state == hard: place inside of the directory (use _original_basename. Fallback to src?)
if (params['state'] not in ("link", "absent") and os.path.isdir(to_bytes(params['path'], errors='surrogate_or_strict'))):
basename = None
if params['_original_basename']:
basename = params['_original_basename']
elif params['src']:
basename = os.path.basename(params['src'])
if basename:
params['path'] = os.path.join(params['path'], basename)
# state should default to file, but since that creates many conflicts,
# default state to 'current' when it exists.
prev_state = get_state(to_bytes(params['path'], errors='surrogate_or_strict'))
if params['state'] is None:
if prev_state != 'absent':
params['state'] = prev_state
elif params['recurse']:
params['state'] = 'directory'
else:
params['state'] = 'file'
# make sure the target path is a directory when we're doing a recursive operation
if params['recurse'] and params['state'] != 'directory':
raise ParameterError(results={"msg": "recurse option requires state to be 'directory'",
"path": params["path"]})
# Make sure that src makes sense with the state
if params['src'] and params['state'] not in ('link', 'hard'):
params['src'] = None
module.warn("The src option requires state to be 'link' or 'hard'. This will become an"
" error in Ansible 2.10")
# In 2.10, switch to this
# raise ParameterError(results={"msg": "src option requires state to be 'link' or 'hard'",
# "path": params["path"]})
def get_state(path):
''' Find out current state '''
b_path = to_bytes(path, errors='surrogate_or_strict')
try:
if os.path.lexists(b_path):
if os.path.islink(b_path):
return 'link'
elif os.path.isdir(b_path):
return 'directory'
elif os.stat(b_path).st_nlink > 1:
return 'hard'
# could be many other things, but defaulting to file
return 'file'
return 'absent'
except OSError as e:
if e.errno == errno.ENOENT: # It may already have been removed
return 'absent'
else:
raise
# This should be moved into the common file utilities
def recursive_set_attributes(b_path, follow, file_args, mtime, atime):
changed = False
try:
for b_root, b_dirs, b_files in os.walk(b_path):
for b_fsobj in b_dirs + b_files:
b_fsname = os.path.join(b_root, b_fsobj)
if not os.path.islink(b_fsname):
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
else:
# Change perms on the link
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
if follow:
b_fsname = os.path.join(b_root, os.readlink(b_fsname))
# The link target could be nonexistent
if os.path.exists(b_fsname):
if os.path.isdir(b_fsname):
# Link is a directory so change perms on the directory's contents
changed |= recursive_set_attributes(b_fsname, follow, file_args, mtime, atime)
# Change perms on the file pointed to by the link
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
except RuntimeError as e:
# on Python3 "RecursionError" is raised which is derived from "RuntimeError"
# TODO once this function is moved into the common file utilities, this should probably raise more general exception
raise AnsibleModuleError(
results={'msg': "Could not recursively set attributes on %s. Original error was: '%s'" % (to_native(b_path), to_native(e))}
)
return changed
def initial_diff(path, state, prev_state):
diff = {'before': {'path': path},
'after': {'path': path},
}
if prev_state != state:
diff['before']['state'] = prev_state
diff['after']['state'] = state
if state == 'absent' and prev_state == 'directory':
walklist = {
'directories': [],
'files': [],
}
b_path = to_bytes(path, errors='surrogate_or_strict')
for base_path, sub_folders, files in os.walk(b_path):
for folder in sub_folders:
folderpath = os.path.join(base_path, folder)
walklist['directories'].append(folderpath)
for filename in files:
filepath = os.path.join(base_path, filename)
walklist['files'].append(filepath)
diff['before']['path_content'] = walklist
return diff
#
# States
#
def get_timestamp_for_time(formatted_time, time_format):
if formatted_time == 'preserve':
return None
elif formatted_time == 'now':
return Sentinel
else:
try:
struct = time.strptime(formatted_time, time_format)
struct_time = time.mktime(struct)
except (ValueError, OverflowError) as e:
raise AnsibleModuleError(results={'msg': 'Error while obtaining timestamp for time %s using format %s: %s'
% (formatted_time, time_format, to_native(e, nonstring='simplerepr'))})
return struct_time
def update_timestamp_for_file(path, mtime, atime, diff=None):
b_path = to_bytes(path, errors='surrogate_or_strict')
try:
# When mtime and atime are set to 'now', rely on utime(path, None) which does not require ownership of the file
# https://github.com/ansible/ansible/issues/50943
if mtime is Sentinel and atime is Sentinel:
# It's not exact but we can't rely on os.stat(path).st_mtime after setting os.utime(path, None) as it may
# not be updated. Just use the current time for the diff values
mtime = atime = time.time()
previous_mtime = os.stat(b_path).st_mtime
previous_atime = os.stat(b_path).st_atime
set_time = None
else:
# If both parameters are None ('preserve'), nothing to do
if mtime is None and atime is None:
return False
previous_mtime = os.stat(b_path).st_mtime
previous_atime = os.stat(b_path).st_atime
if mtime is None:
mtime = previous_mtime
elif mtime is Sentinel:
mtime = time.time()
if atime is None:
atime = previous_atime
elif atime is Sentinel:
atime = time.time()
# If both timestamps are already ok, nothing to do
if mtime == previous_mtime and atime == previous_atime:
return False
set_time = (atime, mtime)
os.utime(b_path, set_time)
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
if 'after' not in diff:
diff['after'] = {}
if mtime != previous_mtime:
diff['before']['mtime'] = previous_mtime
diff['after']['mtime'] = mtime
if atime != previous_atime:
diff['before']['atime'] = previous_atime
diff['after']['atime'] = atime
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while updating modification or access time: %s'
% to_native(e, nonstring='simplerepr'), 'path': path})
return True
def keep_backward_compatibility_on_timestamps(parameter, state):
if state in ['file', 'hard', 'directory', 'link'] and parameter is None:
return 'preserve'
elif state == 'touch' and parameter is None:
return 'now'
else:
return parameter
def execute_diff_peek(path):
"""Take a guess as to whether a file is a binary file"""
b_path = to_bytes(path, errors='surrogate_or_strict')
appears_binary = False
try:
with open(b_path, 'rb') as f:
head = f.read(8192)
except Exception:
# If we can't read the file, we're okay assuming it's text
pass
else:
if b"\x00" in head:
appears_binary = True
return appears_binary
def ensure_absent(path):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
result = {}
if prev_state != 'absent':
diff = initial_diff(path, 'absent', prev_state)
if not module.check_mode:
if prev_state == 'directory':
try:
shutil.rmtree(b_path, ignore_errors=False)
except Exception as e:
raise AnsibleModuleError(results={'msg': "rmtree failed: %s" % to_native(e)})
else:
try:
os.unlink(b_path)
except OSError as e:
if e.errno != errno.ENOENT: # It may already have been removed
raise AnsibleModuleError(results={'msg': "unlinking failed: %s " % to_native(e),
'path': path})
result.update({'path': path, 'changed': True, 'diff': diff, 'state': 'absent'})
else:
result.update({'path': path, 'changed': False, 'state': 'absent'})
return result
def execute_touch(path, follow, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
changed = False
result = {'dest': path}
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
if not module.check_mode:
if prev_state == 'absent':
# Create an empty file if the filename did not already exist
try:
open(b_path, 'wb').close()
changed = True
except (OSError, IOError) as e:
raise AnsibleModuleError(results={'msg': 'Error, could not touch target: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
# Update the attributes on the file
diff = initial_diff(path, 'touch', prev_state)
file_args = module.load_file_common_arguments(module.params)
try:
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
except SystemExit as e:
if e.code:
# We take this to mean that fail_json() was called from
# somewhere in basic.py
if prev_state == 'absent':
# If we just created the file we can safely remove it
os.remove(b_path)
raise
result['changed'] = changed
result['diff'] = diff
return result
def ensure_file_attributes(path, follow, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
if prev_state != 'file':
if follow and prev_state == 'link':
# follow symlink and operate on original
b_path = os.path.realpath(b_path)
path = to_native(b_path, errors='strict')
prev_state = get_state(b_path)
file_args['path'] = path
if prev_state not in ('file', 'hard'):
# file is not absent and any other state is a conflict
raise AnsibleModuleError(results={'msg': 'file (%s) is %s, cannot continue' % (path, prev_state),
'path': path})
diff = initial_diff(path, 'file', prev_state)
changed = module.set_fs_attributes_if_different(file_args, False, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'path': path, 'changed': changed, 'diff': diff}
def ensure_directory(path, follow, recurse, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# For followed symlinks, we need to operate on the target of the link
if follow and prev_state == 'link':
b_path = os.path.realpath(b_path)
path = to_native(b_path, errors='strict')
file_args['path'] = path
prev_state = get_state(b_path)
changed = False
diff = initial_diff(path, 'directory', prev_state)
if prev_state == 'absent':
# Create directory and assign permissions to it
if module.check_mode:
return {'changed': True, 'diff': diff}
curpath = ''
try:
# Split the path so we can apply filesystem attributes recursively
# from the root (/) directory for absolute paths or the base path
# of a relative path. We can then walk the appropriate directory
# path to apply attributes.
# Something like mkdir -p with mode applied to all of the newly created directories
for dirname in path.strip('/').split('/'):
curpath = '/'.join([curpath, dirname])
# Remove leading slash if we're creating a relative path
if not os.path.isabs(path):
curpath = curpath.lstrip('/')
b_curpath = to_bytes(curpath, errors='surrogate_or_strict')
if not os.path.exists(b_curpath):
try:
os.mkdir(b_curpath)
changed = True
except OSError as ex:
# Possibly something else created the dir since the os.path.exists
# check above. As long as it's a dir, we don't need to error out.
if not (ex.errno == errno.EEXIST and os.path.isdir(b_curpath)):
raise
tmp_file_args = file_args.copy()
tmp_file_args['path'] = curpath
changed = module.set_fs_attributes_if_different(tmp_file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
except Exception as e:
raise AnsibleModuleError(results={'msg': 'There was an issue creating %s as requested:'
' %s' % (curpath, to_native(e)),
'path': path})
return {'path': path, 'changed': changed, 'diff': diff}
elif prev_state != 'directory':
# We already know prev_state is not 'absent', therefore it exists in some form.
raise AnsibleModuleError(results={'msg': '%s already exists as a %s' % (path, prev_state),
'path': path})
#
# previous state == directory
#
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
if recurse:
changed |= recursive_set_attributes(b_path, follow, file_args, mtime, atime)
return {'path': path, 'changed': changed, 'diff': diff}
def ensure_symlink(path, src, follow, force, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
prev_state = get_state(b_path)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# source is both the source of a symlink or an informational passing of the src for a template module
# or copy module, even if this module never uses it, it is needed to key off some things
if src is None:
if follow:
# use the current target of the link as the source
src = to_native(os.path.realpath(b_path), errors='strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
if not os.path.islink(b_path) and os.path.isdir(b_path):
relpath = path
else:
b_relpath = os.path.dirname(b_path)
relpath = to_native(b_relpath, errors='strict')
absrc = os.path.join(relpath, src)
b_absrc = to_bytes(absrc, errors='surrogate_or_strict')
if not force and not os.path.exists(b_absrc):
raise AnsibleModuleError(results={'msg': 'src file does not exist, use "force=yes" if you'
' really want to create the link: %s' % absrc,
'path': path, 'src': src})
if prev_state == 'directory':
if not force:
raise AnsibleModuleError(results={'msg': 'refusing to convert from %s to symlink for %s'
% (prev_state, path),
'path': path})
elif os.listdir(b_path):
# refuse to replace a directory that has files in it
raise AnsibleModuleError(results={'msg': 'the directory %s is not empty, refusing to'
' convert it' % path,
'path': path})
elif prev_state in ('file', 'hard') and not force:
raise AnsibleModuleError(results={'msg': 'refusing to convert from %s to symlink for %s'
% (prev_state, path),
'path': path})
diff = initial_diff(path, 'link', prev_state)
changed = False
if prev_state == 'absent':
changed = True
elif prev_state == 'link':
b_old_src = os.readlink(b_path)
if b_old_src != b_src:
diff['before']['src'] = to_native(b_old_src, errors='strict')
diff['after']['src'] = src
changed = True
elif prev_state == 'hard':
changed = True
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link because a hard link exists at destination',
'dest': path, 'src': src})
elif prev_state == 'file':
changed = True
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link because a file exists at destination',
'dest': path, 'src': src})
elif prev_state == 'directory':
changed = True
if os.path.exists(b_path):
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link because a file exists at destination',
'dest': path, 'src': src})
else:
raise AnsibleModuleError(results={'msg': 'unexpected position reached', 'dest': path, 'src': src})
if changed and not module.check_mode:
if prev_state != 'absent':
# try to replace atomically
b_tmppath = to_bytes(os.path.sep).join(
[os.path.dirname(b_path), to_bytes(".%s.%s.tmp" % (os.getpid(), time.time()))]
)
try:
if prev_state == 'directory':
os.rmdir(b_path)
os.symlink(b_src, b_tmppath)
os.rename(b_tmppath, b_path)
except OSError as e:
if os.path.exists(b_tmppath):
os.unlink(b_tmppath)
raise AnsibleModuleError(results={'msg': 'Error while replacing: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
else:
try:
os.symlink(b_src, b_path)
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while linking: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
if module.check_mode and not os.path.exists(b_path):
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
# Now that we might have created the symlink, get the arguments.
# We need to do it now so we can properly follow the symlink if needed
# because load_file_common_arguments sets 'path' according
# the value of follow and the symlink existence.
file_args = module.load_file_common_arguments(module.params)
# Whenever we create a link to a nonexistent target we know that the nonexistent target
# cannot have any permissions set on it. Skip setting those and emit a warning (the user
# can set follow=False to remove the warning)
if follow and os.path.islink(b_path) and not os.path.exists(file_args['path']):
module.warn('Cannot set fs attributes on a non-existent symlink target. follow should be'
' set to False to avoid this.')
else:
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
def ensure_hardlink(path, src, follow, force, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# src is the source of a hardlink. We require it if we are creating a new hardlink.
# We require path in the argument_spec so we know it is present at this point.
if src is None:
raise AnsibleModuleError(results={'msg': 'src is required for creating new hardlinks'})
if not os.path.exists(b_src):
raise AnsibleModuleError(results={'msg': 'src does not exist', 'dest': path, 'src': src})
diff = initial_diff(path, 'hard', prev_state)
changed = False
if prev_state == 'absent':
changed = True
elif prev_state == 'link':
b_old_src = os.readlink(b_path)
if b_old_src != b_src:
diff['before']['src'] = to_native(b_old_src, errors='strict')
diff['after']['src'] = src
changed = True
elif prev_state == 'hard':
if os.stat(b_path).st_ino != os.stat(b_src).st_ino:
changed = True
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link, different hard link exists at destination',
'dest': path, 'src': src})
elif prev_state == 'file':
changed = True
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link, %s exists at destination' % prev_state,
'dest': path, 'src': src})
elif prev_state == 'directory':
changed = True
if os.path.exists(b_path):
if os.stat(b_path).st_ino == os.stat(b_src).st_ino:
return {'path': path, 'changed': False}
elif not force:
raise AnsibleModuleError(results={'msg': 'Cannot link: different hard link exists at destination',
'dest': path, 'src': src})
else:
raise AnsibleModuleError(results={'msg': 'unexpected position reached', 'dest': path, 'src': src})
if changed and not module.check_mode:
if prev_state != 'absent':
# try to replace atomically
b_tmppath = to_bytes(os.path.sep).join(
[os.path.dirname(b_path), to_bytes(".%s.%s.tmp" % (os.getpid(), time.time()))]
)
try:
if prev_state == 'directory':
if os.path.exists(b_path):
try:
os.unlink(b_path)
except OSError as e:
if e.errno != errno.ENOENT: # It may already have been removed
raise
os.link(b_src, b_tmppath)
os.rename(b_tmppath, b_path)
except OSError as e:
if os.path.exists(b_tmppath):
os.unlink(b_tmppath)
raise AnsibleModuleError(results={'msg': 'Error while replacing: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
else:
try:
os.link(b_src, b_path)
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while linking: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
if module.check_mode and not os.path.exists(b_path):
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
def main():
global module
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', choices=['absent', 'directory', 'file', 'hard', 'link', 'touch']),
path=dict(type='path', required=True, aliases=['dest', 'name']),
_original_basename=dict(type='str'), # Internal use only, for recursive ops
recurse=dict(type='bool', default=False),
force=dict(type='bool', default=False), # Note: Should not be in file_common_args in future
follow=dict(type='bool', default=True), # Note: Different default than file_common_args
_diff_peek=dict(type='str'), # Internal use only, for internal checks in the action plugins
src=dict(type='path'), # Note: Should not be in file_common_args in future
modification_time=dict(type='str'),
modification_time_format=dict(type='str', default='%Y%m%d%H%M.%S'),
access_time=dict(type='str'),
access_time_format=dict(type='str', default='%Y%m%d%H%M.%S'),
),
add_file_common_args=True,
supports_check_mode=True,
)
# When we rewrite basic.py, we will do something similar to this on instantiating an AnsibleModule
sys.excepthook = _ansible_excepthook
additional_parameter_handling(module.params)
params = module.params
state = params['state']
recurse = params['recurse']
force = params['force']
follow = params['follow']
path = params['path']
src = params['src']
timestamps = {}
timestamps['modification_time'] = keep_backward_compatibility_on_timestamps(params['modification_time'], state)
timestamps['modification_time_format'] = params['modification_time_format']
timestamps['access_time'] = keep_backward_compatibility_on_timestamps(params['access_time'], state)
timestamps['access_time_format'] = params['access_time_format']
# short-circuit for diff_peek
if params['_diff_peek'] is not None:
appears_binary = execute_diff_peek(to_bytes(path, errors='surrogate_or_strict'))
module.exit_json(path=path, changed=False, appears_binary=appears_binary)
if state == 'file':
result = ensure_file_attributes(path, follow, timestamps)
elif state == 'directory':
result = ensure_directory(path, follow, recurse, timestamps)
elif state == 'link':
result = ensure_symlink(path, src, follow, force, timestamps)
elif state == 'hard':
result = ensure_hardlink(path, src, follow, force, timestamps)
elif state == 'touch':
result = execute_touch(path, follow, timestamps)
elif state == 'absent':
result = ensure_absent(path)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,433 |
Template module shows all rows as difference.
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The template module shows all rows as differences when `STRING_CONVERSION_ACTION=error`. It seems the template module loses the `--- before` information.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Template Module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.8.2
config file = /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg
configured module search path = ['/Users/mhimuro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /Users/mhimuro/devel/homebrew/bin/ansible
python version = 3.7.4 (default, Jul 12 2019, 09:36:09) [Clang 10.0.1 (clang-1001.0.46.4)]
$
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
STRING_CONVERSION_ACTION(/Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg) = error
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
target OS versions: CentOS Linux release 7.6.1810 (Core)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Prepare dummy.txt on the target OS.
```command
# echo hello > /root/dummy.txt
```
2. ansible-playbook
<!--- Paste example playbooks or commands between quotes below -->
```command
$ ansible-playbook -i hosts site.yml --diff --check
PLAY [deploy by template module] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************************
ok: [192.168.0.196]
TASK [template] ****************************************************************************************************************************************************************************************************
--- before
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-6716163hjzxdi/tmp9ta05f8n/dummy.txt
@@ -0,0 +1,2 @@
+hello
+world
changed: [192.168.0.196]
PLAY RECAP *********************************************************************************************************************************************************************************************************
192.168.0.196 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$
```
```yaml
$ cat site.yml
- name: deploy by template module
hosts: all
tasks:
- template:
src: dummy.txt
dest: /root/dummy.txt
```
```
$ cat dummy.txt
hello
world
```
##### EXPECTED RESULTS
Only the modified line should be shown, with the remote file as the `before` source:
```
TASK [template] ****************************************************************************************************************************************************************************************************
--- before: /root/dummy.txt
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-67303hkyg6c4n/tmpl13oaaoa/dummy.txt
@@ -1 +1,2 @@
hello
+world
changed: [192.168.0.196]
```
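For comparison, the expected hunk can be reproduced with Python's `difflib` (a minimal sketch; the `before`/`after` labels stand in for the real file paths):

```python
import difflib

# Content currently on the remote host and the content being templated in.
before = ["hello"]
after = ["hello", "world"]

# unified_diff reports only the added line, with "hello" kept as context --
# this is the hunk the --diff output is expected to show, rather than a
# diff against an empty "before" file.
diff = list(difflib.unified_diff(before, after,
                                 fromfile="before", tofile="after",
                                 lineterm=""))
print("\n".join(diff))
# prints:
# --- before
# +++ after
# @@ -1 +1,2 @@
#  hello
# +world
```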
##### ACTUAL RESULTS
```paste below
$ ansible-playbook -i hosts site.yml --diff --check -vvvv
ansible-playbook 2.8.2
config file = /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg
configured module search path = ['/Users/mhimuro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /Users/mhimuro/devel/homebrew/bin/ansible-playbook
python version = 3.7.4 (default, Jul 12 2019, 09:36:09) [Clang 10.0.1 (clang-1001.0.46.4)]
Using /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
script declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
auto declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
Parsed /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: site.yml *************************************************************************************************************************************************************************************************
Positional arguments: site.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
check: True
diff: True
inventory: ('/Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts',)
forks: 5
1 plays in site.yml
PLAY [deploy by template module] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************************
task path: /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/site.yml:1
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.0.196> (0, b'/root\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876 `" && echo ansible-tmp-1563877177.7595708-149449123721876="` echo /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876 `" ) && sleep 0'"'"''
<192.168.0.196> (0, b'ansible-tmp-1563877177.7595708-149449123721876=/root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> Attempting python interpreter discovery
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<192.168.0.196> (0, b'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python2.7\n/usr/libexec/platform-python\n/usr/bin/python\nENDFOUND\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<192.168.0.196> (0, b'{"osrelease_content": "NAME=\\"CentOS Linux\\"\\nVERSION=\\"7 (Core)\\"\\nID=\\"centos\\"\\nID_LIKE=\\"rhel fedora\\"\\nVERSION_ID=\\"7\\"\\nPRETTY_NAME=\\"CentOS Linux 7 (Core)\\"\\nANSI_COLOR=\\"0;31\\"\\nCPE_NAME=\\"cpe:/o:centos:centos:7\\"\\nHOME_URL=\\"https://www.centos.org/\\"\\nBUG_REPORT_URL=\\"https://bugs.centos.org/\\"\\n\\nCENTOS_MANTISBT_PROJECT=\\"CentOS-7\\"\\nCENTOS_MANTISBT_PROJECT_VERSION=\\"7\\"\\nREDHAT_SUPPORT_PRODUCT=\\"centos\\"\\nREDHAT_SUPPORT_PRODUCT_VERSION=\\"7\\"\\n\\n", "platform_dist_result": ["centos", "7.6.1810", "Core"]}\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/system/setup.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7 TO /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7 /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:8 O:131072 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:9 O:163840 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:10 O:196608 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:11 O:229376 S:23099\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 32768 bytes at 98304\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 8 32768 bytes at 131072\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 9 32768 bytes at 163840\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 10 32768 bytes at 196608\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 11 23099 bytes at 229376\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/ /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py && sleep 0'"'"''
<192.168.0.196> (0, b'\r\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_fibre_channel_wwn": [], "module_setup": true, "ansible_distribution_version": "7.6", "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "C", "LC_NUMERIC": "C", "TERM": "screen", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/0", "SHLVL": "2", "SSH_TTY": "/dev/pts/0", "PWD": "/root", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "XDG_SESSION_ID": "27910", "SSH_CLIENT": "192.168.0.116 53653 22", "LOGNAME": "root", "USER": "root", "MAIL": "/var/mail/root", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LS_COLORS": "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au
=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:", "HOME": "/root", "LC_ALL": "C", "_": "/usr/bin/python", "SSH_CONNECTION": "192.168.0.116 53653 192.168.0.196 22"}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_vethcb884b7": {"macaddress": "8a:47:31:d5:d3:1e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", 
"tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "vethcb884b7", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8847:31ff:fed5:d31e"}], "active": true, "speed": 10000}, "ansible_default_ipv4": {"macaddress": "1c:87:2c:41:6e:ce", "network": "192.168.0.0", "mtu": 1500, "broadcast": "192.168.0.255", "alias": "enp4s0", "netmask": "255.255.255.0", "address": "192.168.0.196", "interface": "enp4s0", "type": "ether", "gateway": "192.168.0.3"}, "ansible_swapfree_mb": 7654, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "ja_JP.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-957.1.3.el7.x86_64", "quiet": true, "rhgb": true, "rd.lvm.lv": "centos/swap", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/centos-root"}, "ansible_machine_id": "75e09accf1bb49fa8d70b2de021e00fb", "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "BD9ABFA8-FE30-2EE7-41D2-1C872C416ECE", "ansible_pkg_mgr": "yum", "ansible_vethc03bba1": {"macaddress": "1e:41:f1:a6:31:ff", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", 
"netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "vethc03bba1", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1c41:f1ff:fea6:31ff"}], "active": true, "speed": 10000}, "ansible_iscsi_iqn": "", "ansible_veth2726a80": {"macaddress": "4a:02:87:c6:94:14", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", 
"busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "veth2726a80", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4802:87ff:fec6:9414"}], "active": true, "speed": 10000}, "ansible_all_ipv6_addresses": ["fe80::1e87:2cff:fe41:6ece", "fe80::fcb2:74ff:feb8:ed7a", "fe80::1c41:f1ff:fea6:31ff", "fe80::a0e7:6fff:fea1:a0d4", "fe80::4802:87ff:fec6:9414", "fe80::42:20ff:fe23:46cf", "fe80::8847:31ff:fed5:d31e"], "ansible_uptime_seconds": 6684898, "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_hostnqn": "", "ansible_user_shell": "/bin/bash", "ansible_product_serial": "System Serial Number", "ansible_form_factor": "Desktop", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:20:23:46:cf", "features": {"tx_checksum_ipv4": "off [fixed]", 
"generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [requested on]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [requested on]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": ["veth2726a80", "vethc03bba1", "veth5af9b4f", "veth27f581f", "vethcb884b7"], "id": "8000.0242202346cf", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "172.17.255.255", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::42:20ff:fe23:46cf"}], "active": 
true, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "1", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "2", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "3", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "4", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "5", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "6", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "7", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAAdVLDGWn3TP6UxDh2EOhbblOwKh9nc8rDSSYZ33sc9SQIPhmYsGGnP62cC5Fm4uVe14lBF0Thf8IZCMIYpuLY=", "ansible_user_gid": 0, "ansible_system_vendor": "ASUS", "ansible_swaptotal_mb": 7935, "ansible_distribution_major_version": "7", "ansible_real_group_id": 0, "ansible_lsb": {}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDyDmjz8qJ/JvPuvlZi3qiIT1vBBciJiargJHLs8ccywFNcVbJiXj/3fibFoE2VISKLtcYtPvxAzMnKeowdPc5BmmTmdKyyvSMTxmbX25lhb9t0LhkFeIUXbhy+j9Wvj6/d39Yuh2zUbIqI5YR/qpssEUeh2z/eROm/jN0lj1TSnhcYxDAe04GvXGBfDCCz1lDW/rX1/JgBIdRYGUyB57BbeS3FlvFxz7NfzBEdAdr+Dvv/oxTd4aoteqx1+Z8pNVKYkDw1nbjMFcZDF9u/uANvwh3p0qw4Nfve5Sit/zkDdkdC+DkpnnR5W+M2O1o7Iyq90AafS4xCqzYG6MDR+Jv/", "ansible_user_gecos": "root", "ansible_processor_threads_per_core": 2, "ansible_system": "Linux", "ansible_all_ipv4_addresses": ["192.168.0.196", "172.17.0.1"], "ansible_python_version": "2.7.5", "ansible_product_version": "System Version", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 7690, "used": 7491, "free": 199}, "swap": {"cached": 16, "total": 7935, "free": 7654, "used": 281}, "nocache": {"used": 6807, "free": 883}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_virtualization_role": 
"host", "ansible_dns": {"nameservers": ["192.168.0.5", "192.168.0.16"], "search": ["work"]}, "ansible_effective_group_id": 0, "ansible_enp4s0": {"macaddress": "1c:87:2c:41:6e:ce", "features": {"tx_checksum_ipv4": "off", "generic_receive_offload": "on", "tx_checksum_ipv6": "off", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off", "highdma": "on [fixed]", "rx_fcs": "off", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "off", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "off", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "off", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "off", "rx_checksumming": "on", "tx_tcp_segmentation": "off", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "off [requested on]", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "off", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:04:00.0", "module": "r8169", "mtu": 1500, 
"device": "enp4s0", "promisc": false, "timestamping": ["tx_software", "rx_software", "software"], "ipv4": {"broadcast": "192.168.0.255", "netmask": "255.255.255.0", "network": "192.168.0.0", "address": "192.168.0.196"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1e87:2cff:fe41:6ece"}], "active": true, "speed": 1000, "hw_timestamp_filters": []}, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", 
"rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 7690, "ansible_device_links": {"masters": {"loop1": ["dm-3"], "loop0": ["dm-3"], "sda2": ["dm-0", "dm-1", "dm-2"], "dm-3": ["dm-4", "dm-5", "dm-6", "dm-7", "dm-8"]}, "labels": {}, "ids": {"sr0": ["ata-ASUS_DRW-24F1ST_a_S10M68EG2002AW"], "sda2": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part2", "lvm-pv-uuid-Q78tme-t9AP-o3G0-fAs8-27fr-QioB-m5PGHc", "wwn-0x5000039fe0c30158-part2"], "sda": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS", "wwn-0x5000039fe0c30158"], "dm-8": ["dm-name-docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "sda1": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part1", "wwn-0x5000039fe0c30158-part1"], "dm-6": ["dm-name-docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28"], "dm-7": ["dm-name-docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf"], "dm-4": ["dm-name-docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410"], "dm-5": ["dm-name-docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff"], "dm-2": ["dm-name-centos-home", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSMaaSDIGwxchQRoUyL1ntnMPT6KOAriTU"], "dm-0": ["dm-name-centos-root", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSUq3u89edCoYmtN3lATh7xy5GMZr5Pgo7"], "dm-1": ["dm-name-centos-swap", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnS5bxciorzpAwg9QYL7sMS1PoWUcb0IiXV"]}, "uuids": {"sda1": ["7d6d535a-6728-4c83-8ab2-40cb45b64e7d"], "dm-2": ["80fe1d0c-c3c4-4442-a467-f2975fd87ba5"], 
"dm-0": ["ac012a2a-a7f8-425b-911a-9197e611fbfe"], "dm-8": ["48eda381-df74-4ad8-a63a-46c167bf1144"], "dm-1": ["13900741-5a75-44f6-8848-3325135493d0"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_proc_cmdline": {"LANG": "ja_JP.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-957.1.3.el7.x86_64", "quiet": true, "rhgb": true, "rd.lvm.lv": ["centos/root", "centos/swap"], "crashkernel": "auto", "ro": true, "root": "/dev/mapper/centos-root"}, "ansible_memfree_mb": 199, "ansible_processor_count": 1, "ansible_hostname": "intra", "ansible_interfaces": ["veth2726a80", "vethcb884b7", "docker0", "lo", "enp4s0", "vethc03bba1", "veth5af9b4f", "veth27f581f"], "ansible_selinux": {"status": "disabled"}, "ansible_fqdn": "ec2-52-213-25-113.eu-west-1.compute.amazonaws.com", "ansible_mounts": [{"block_used": 8256, "uuid": "80fe1d0c-c3c4-4442-a467-f2975fd87ba5", "size_total": 437290033152, "block_total": 106760262, "mount": "/home", "block_available": 106752006, "size_available": 437256216576, "fstype": "xfs", "inode_total": 427249664, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/mapper/centos-home", "inode_used": 3, "block_size": 4096, "inode_available": 427249661}, {"block_used": 74602, "uuid": "7d6d535a-6728-4c83-8ab2-40cb45b64e7d", "size_total": 517713920, "block_total": 126395, "mount": "/boot", "block_available": 51793, "size_available": 212144128, "fstype": "xfs", "inode_total": 512000, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/sda1", "inode_used": 361, "block_size": 4096, "inode_available": 511639}, {"block_used": 10291, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "block_available": 2608333, "size_available": 10683731968, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": 
"/dev/mapper/docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "inode_used": 519, "block_size": 4096, "inode_available": 10484217}, {"block_used": 7070823, "uuid": "ac012a2a-a7f8-425b-911a-9197e611fbfe", "size_total": 53660876800, "block_total": 13100800, "mount": "/", "block_available": 6029977, "size_available": 24698785792, "fstype": "xfs", "inode_total": 52428800, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/mapper/centos-root", "inode_used": 146375, "block_size": 4096, "inode_available": 52282425}, {"block_used": 79838, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "block_available": 2538786, "size_available": 10398867456, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "inode_used": 14836, "block_size": 4096, "inode_available": 10469900}, {"block_used": 334234, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013", "block_available": 2284390, "size_available": 9356861440, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013", "inode_used": 79705, "block_size": 4096, "inode_available": 10405031}, {"block_used": 42443, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", 
"block_available": 2576181, "size_available": 10552037376, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", "inode_used": 7156, "block_size": 4096, "inode_available": 10477580}, {"block_used": 322515, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "block_available": 2296109, "size_available": 9404862464, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "inode_used": 79368, "block_size": 4096, "inode_available": 10405368}], "ansible_nodename": "intra.work", "ansible_lvm": {"pvs": {"/dev/sda2": {"free_g": "0.06", "size_g": "465.27", "vg": "centos"}}, "lvs": {"home": {"size_g": "407.46", "vg": "centos"}, "root": {"size_g": "50.00", "vg": "centos"}, "swap": {"size_g": "7.75", "vg": "centos"}}, "vgs": {"centos": {"free_g": "0.06", "size_g": "465.27", "num_lvs": "3", "num_pvs": "1"}}}, "ansible_domain": "eu-west-1.compute.amazonaws.com", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "kvm", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIEzv1iG3Mak/xFq6KbljB8M4YaTfHo/ZiskvcC9Kz7kV", "ansible_processor_cores": 4, "ansible_bios_version": "2201", "ansible_date_time": {"weekday_number": "2", "iso8601_basic_short": "20190723T191938", "tz": "JST", "weeknumber": "29", "hour": "19", "year": "2019", "minute": "19", "tz_offset": "+0900", "month": "07", "epoch": "1563877178", "iso8601_micro": "2019-07-23T10:19:38.711920Z", "weekday": "\\u706b\\u66dc\\u65e5", "time": "19:19:38", "date": 
"2019-07-23", "iso8601": "2019-07-23T10:19:38Z", "day": "23", "iso8601_basic": "20190723T191938711851", "second": "38"}, "ansible_distribution_release": "Core", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_veth27f581f": {"macaddress": "fe:b2:74:b8:ed:7a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "veth27f581f", "promisc": true, 
"timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fcb2:74ff:feb8:ed7a"}], "active": true, "speed": 10000}, "ansible_product_name": "All Series", "ansible_devices": {"dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "209715200", "links": {"masters": ["dm-4", "dm-5", "dm-6", "dm-7", "dm-8"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", "docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "size": "100.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "ASUS", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-ASUS_DRW-24F1ST_a_S10M68EG2002AW"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "DRW-24F1ST a", 
"partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "ATA", "sectors": "976773168", "links": {"masters": [], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS", "wwn-0x5000039fe0c30158"], "uuids": []}, "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2"], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part2", "lvm-pv-uuid-Q78tme-t9AP-o3G0-fAs8-27fr-QioB-m5PGHc", "wwn-0x5000039fe0c30158-part2"], "uuids": []}, "sectors": "975747072", "start": "1026048", "holders": ["centos-root", "centos-swap", "centos-home"], "size": "465.27 GB"}, "sda1": {"sectorsize": 512, "uuid": "7d6d535a-6728-4c83-8ab2-40cb45b64e7d", "links": {"masters": [], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part1", "wwn-0x5000039fe0c30158-part1"], "uuids": ["7d6d535a-6728-4c83-8ab2-40cb45b64e7d"]}, "sectors": "1024000", "start": "2048", "holders": [], "size": "500.00 MB"}}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "TOSHIBA DT01ABA0", "wwn": "0x5000039fe0c30158", "holders": [], "size": "465.76 GB"}, "dm-8": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "uuids": ["48eda381-df74-4ad8-a63a-46c167bf1144"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-6": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": 
["dm-name-docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-7": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "loop1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "4194304", "links": {"masters": ["dm-3"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-pool"], "size": "2.00 GB"}, "loop0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "209715200", "links": {"masters": ["dm-3"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-pool"], "size": "100.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "854499328", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-home", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSMaaSDIGwxchQRoUyL1ntnMPT6KOAriTU"], "uuids": ["80fe1d0c-c3c4-4442-a467-f2975fd87ba5"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", 
"model": null, "partitions": {}, "holders": [], "size": "407.46 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "104857600", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-root", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSUq3u89edCoYmtN3lATh7xy5GMZr5Pgo7"], "uuids": ["ac012a2a-a7f8-425b-911a-9197e611fbfe"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "50.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "16252928", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-swap", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnS5bxciorzpAwg9QYL7sMS1PoWUcb0IiXV"], "uuids": ["13900741-5a75-44f6-8848-3325135493d0"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "7.75 GB"}}, "ansible_user_uid": 0, "ansible_bios_date": "11/26/2014", "ansible_distribution": "CentOS", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", 
"cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_veth5af9b4f": {"macaddress": "a2:e7:6f:a1:a0:d4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 
1500, "device": "veth5af9b4f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a0e7:6fff:fea1:a0d4"}], "active": true, "speed": 10000}}}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 192.168.0.196 closed.\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
ok: [192.168.0.196]
META: ran handlers
TASK [template] ****************************************************************************************************************************************************************************************************
task path: /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/site.yml:4
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.0.196> (0, b'/root\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802 `" && echo ansible-tmp-1563877178.840123-58444523661802="` echo /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802 `" ) && sleep 0'"'"''
<192.168.0.196> (0, b'ansible-tmp-1563877178.840123-58444523661802=/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/files/stat.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329 TO /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329 /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:8068\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 8068 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py && sleep 0'"'"''
<192.168.0.196> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "path": "/root/dummy.txt", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "us-ascii", "uid": 0, "exists": true, "attr_flags": "", "woth": false, "isreg": true, "device_type": 0, "mtime": 1563871385.1164613, "block_size": 4096, "inode": 201591273, "isgid": false, "size": 6, "executable": false, "isuid": false, "readable": true, "version": "889946615", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "text/plain", "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/root/dummy.txt", "xusr": false, "atime": 1563871388.064355, "isdir": false, "ctime": 1563871385.1404603, "isblk": false, "wgrp": false, "checksum": "9591818c07e900db7e1e0bc4b884c945e6a61b24", "dev": 64768, "roth": true, "isfifo": false, "mode": "0644", "xgrp": false, "rusr": true, "attributes": []}, "changed": false}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from 
master 0\r\nShared connection to 192.168.0.196 closed.\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/files/file.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g TO /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:12803\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 12803 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py && sleep 0'"'"''
<192.168.0.196> (1, b'\r\n{"msg": "argument _diff_peek is of type <type \'bool\'> and we were unable to convert to str: Quote the entire value to ensure it does not change.", "failed": true, "exception": "WARNING: The below traceback may *not* be related to the actual failure.\\n File \\"/tmp/ansible_file_payload_eqV_MW/ansible_file_payload.zip/ansible/module_utils/basic.py\\", line 1780, in _check_argument_types\\n param[k] = type_checker(value)\\n File \\"/tmp/ansible_file_payload_eqV_MW/ansible_file_payload.zip/ansible/module_utils/basic.py\\", line 1631, in _check_type_str\\n raise TypeError(to_native(msg))\\n", "invocation": {"module_args": {"force": false, "recurse": false, "access_time_format": "%Y%m%d%H%M.%S", "_diff_peek": true, "modification_time_format": "%Y%m%d%H%M.%S", "path": "/root/dummy.txt", "follow": true}}}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.0.196 closed.\r\n')
<192.168.0.196> Failed to connect to the host via ssh: OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /Users/mhimuro/.ssh/config
debug1: /Users/mhimuro/.ssh/config line 25: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug1: /etc/ssh/ssh_config line 52: Applying options for *
debug2: resolve_canonicalize: hostname 192.168.0.196 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 66322
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 192.168.0.196 closed.
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
--- before
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt
@@ -0,0 +1,2 @@
+hello
+world
changed: [192.168.0.196] => {
"changed": true,
"diff": [
{
"after": "hello\nworld\n",
"after_header": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt",
"before": ""
}
],
"invocation": {
"dest": "/root/dummy.txt",
"follow": false,
"mode": null,
"module_args": {
"dest": "/root/dummy.txt",
"follow": false,
"mode": null,
"src": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt"
},
"src": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt"
}
}
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************************************************************************************************************************************************
192.168.0.196 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$
```
|
https://github.com/ansible/ansible/issues/59433
|
https://github.com/ansible/ansible/pull/60428
|
9a51dff0b17f01bcb280a438ecfe785e5fda4541
|
9b7198d25ecf084b6a465ba445efd426022265c3
| 2019-07-23T10:39:00Z |
python
| 2020-01-17T21:02:28Z |
lib/ansible/plugins/action/__init__.py
|
# coding: utf-8
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import json
import os
import random
import re
import stat
import tempfile
import time
from abc import ABCMeta, abstractmethod
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleActionSkip, AnsibleActionFail
from ansible.executor.module_common import modify_module
from ansible.executor.interpreter_discovery import discover_interpreter, InterpreterDiscoveryRequiredError
from ansible.module_utils.common._collections_compat import Sequence
from ansible.module_utils.json_utils import _filter_non_json_lines
from ansible.module_utils.six import binary_type, string_types, text_type, iteritems, with_metaclass
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.parsing.utils.jsonify import jsonify
from ansible.release import __version__
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import wrap_var, AnsibleUnsafeText
from ansible.vars.clean import remove_internal_keys
display = Display()
class ActionBase(with_metaclass(ABCMeta, object)):
'''
This class is the base class for all action plugins, and defines
code common to all actions. The base class handles the connection
by putting/getting files and executing commands based on the current
action in use.
'''
# A set of valid arguments
_VALID_ARGS = frozenset([])
def __init__(self, task, connection, play_context, loader, templar, shared_loader_obj):
self._task = task
self._connection = connection
self._play_context = play_context
self._loader = loader
self._templar = templar
self._shared_loader_obj = shared_loader_obj
self._cleanup_remote_tmp = False
self._supports_check_mode = True
self._supports_async = False
# interpreter discovery state
self._discovered_interpreter_key = None
self._discovered_interpreter = False
self._discovery_deprecation_warnings = []
self._discovery_warnings = []
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
self._used_interpreter = None
@abstractmethod
def run(self, tmp=None, task_vars=None):
""" Action Plugins should implement this method to perform their
tasks. Everything else in this base class is a helper method for the
action plugin to do that.
:kwarg tmp: Deprecated parameter. This is no longer used. An action plugin that calls
another one and wants to use the same remote tmp for both should set
self._connection._shell.tmpdir rather than this parameter.
:kwarg task_vars: The variables (host vars, group vars, config vars,
etc) associated with this task.
:returns: dictionary of results from the module
Implementors of action modules may find the following variables especially useful:
* Module parameters. These are stored in self._task.args
"""
result = {}
if tmp is not None:
result['warning'] = ['ActionModule.run() no longer honors the tmp parameter. Action'
' plugins should set self._connection._shell.tmpdir to share'
' the tmpdir']
del tmp
if self._task.async_val and not self._supports_async:
raise AnsibleActionFail('async is not supported for this task.')
elif self._play_context.check_mode and not self._supports_check_mode:
raise AnsibleActionSkip('check mode is not supported for this task.')
elif self._task.async_val and self._play_context.check_mode:
raise AnsibleActionFail('check mode and async cannot be used on same task.')
# Error if invalid argument is passed
if self._VALID_ARGS:
task_opts = frozenset(self._task.args.keys())
bad_opts = task_opts.difference(self._VALID_ARGS)
if bad_opts:
raise AnsibleActionFail('Invalid options for %s: %s' % (self._task.action, ','.join(list(bad_opts))))
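The option check above relies on frozenset difference: any task argument not in the valid set is reported. A standalone sketch of that pattern (`VALID_ARGS` and `invalid_options` are illustrative names, not part of Ansible's API):

```python
# Minimal sketch of the frozenset-difference validation used above.
# VALID_ARGS and invalid_options are illustrative names, not Ansible API.
VALID_ARGS = frozenset(['src', 'dest', 'mode'])

def invalid_options(task_args):
    """Return the option names that are not in the valid set."""
    return frozenset(task_args) - VALID_ARGS

bad_opts = invalid_options({'src': 'a.txt', 'dest': 'b.txt', 'follow': True})
# bad_opts contains only 'follow'
```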
if self._connection._shell.tmpdir is None and self._early_needs_tmp_path():
self._make_tmp_path()
return result
def cleanup(self, force=False):
"""Method to perform a clean up at the end of an action plugin execution
By default this is designed to clean up the shell tmpdir, and is toggled based on whether
async is in use
Action plugins may override this if they deem necessary, but should still call this method
via super
"""
if force or not self._task.async_val:
self._remove_tmp_path(self._connection._shell.tmpdir)
def get_plugin_option(self, plugin, option, default=None):
"""Helper to get an option from a plugin without having to use
the try/except dance everywhere to set a default
"""
try:
return plugin.get_option(option)
except (AttributeError, KeyError):
return default
def get_become_option(self, option, default=None):
return self.get_plugin_option(self._connection.become, option, default=default)
def get_connection_option(self, option, default=None):
return self.get_plugin_option(self._connection, option, default=default)
def get_shell_option(self, option, default=None):
return self.get_plugin_option(self._connection._shell, option, default=default)
def _remote_file_exists(self, path):
cmd = self._connection._shell.exists(path)
result = self._low_level_execute_command(cmd=cmd, sudoable=True)
if result['rc'] == 0:
return True
return False
def _configure_module(self, module_name, module_args, task_vars=None):
'''
Handles the loading and templating of the module code through the
modify_module() function.
'''
if task_vars is None:
task_vars = dict()
# Search module path(s) for named module.
for mod_type in self._connection.module_implementation_preferences:
# Check to determine if PowerShell modules are supported, and apply
# some fixes (hacks) to module name + args.
if mod_type == '.ps1':
# win_stat, win_file, and win_copy are not just like their
# python counterparts but they are compatible enough for our
# internal usage
if module_name in ('stat', 'file', 'copy') and self._task.action != module_name:
module_name = 'win_%s' % module_name
# Remove extra quotes surrounding path parameters before sending to module.
if module_name in ('win_stat', 'win_file', 'win_copy', 'slurp') and module_args and hasattr(self._connection._shell, '_unquote'):
for key in ('src', 'dest', 'path'):
if key in module_args:
module_args[key] = self._connection._shell._unquote(module_args[key])
module_path = self._shared_loader_obj.module_loader.find_plugin(module_name, mod_type, collection_list=self._task.collections)
if module_path:
break
else: # This is a for-else: http://bit.ly/1ElPkyg
raise AnsibleError("The module %s was not found in configured module paths" % (module_name))
# insert shared code and arguments into the module
final_environment = dict()
self._compute_environment_string(final_environment)
become_kwargs = {}
if self._connection.become:
become_kwargs['become'] = True
become_kwargs['become_method'] = self._connection.become.name
become_kwargs['become_user'] = self._connection.become.get_option('become_user',
playcontext=self._play_context)
become_kwargs['become_password'] = self._connection.become.get_option('become_pass',
playcontext=self._play_context)
become_kwargs['become_flags'] = self._connection.become.get_option('become_flags',
playcontext=self._play_context)
# modify_module will exit early if interpreter discovery is required; re-run after if necessary
for dummy in (1, 2):
try:
(module_data, module_style, module_shebang) = modify_module(module_name, module_path, module_args, self._templar,
task_vars=task_vars,
module_compression=self._play_context.module_compression,
async_timeout=self._task.async_val,
environment=final_environment,
**become_kwargs)
break
except InterpreterDiscoveryRequiredError as idre:
self._discovered_interpreter = AnsibleUnsafeText(discover_interpreter(
action=self,
interpreter_name=idre.interpreter_name,
discovery_mode=idre.discovery_mode,
task_vars=task_vars))
# update the local task_vars with the discovered interpreter (which might be None);
# we'll propagate back to the controller in the task result
discovered_key = 'discovered_interpreter_%s' % idre.interpreter_name
# store in local task_vars facts collection for the retry and any other usages in this worker
if task_vars.get('ansible_facts') is None:
task_vars['ansible_facts'] = {}
task_vars['ansible_facts'][discovered_key] = self._discovered_interpreter
# preserve this so _execute_module can propagate back to controller as a fact
self._discovered_interpreter_key = discovered_key
return (module_style, module_shebang, module_data, module_path)
def _compute_environment_string(self, raw_environment_out=None):
'''
Builds the environment string to be used when executing the remote task.
'''
final_environment = dict()
if self._task.environment is not None:
environments = self._task.environment
if not isinstance(environments, list):
environments = [environments]
# The order of environments matters to make sure we merge
# in the parent's values first so those in the block then
# task 'win' in precedence
for environment in environments:
if environment is None or len(environment) == 0:
continue
temp_environment = self._templar.template(environment)
if not isinstance(temp_environment, dict):
raise AnsibleError("environment must be a dictionary, received %s (%s)" % (temp_environment, type(temp_environment)))
# very deliberately using update here instead of combine_vars, as
# these environment settings should not need to merge sub-dicts
final_environment.update(temp_environment)
if len(final_environment) > 0:
final_environment = self._templar.template(final_environment)
if isinstance(raw_environment_out, dict):
raw_environment_out.clear()
raw_environment_out.update(final_environment)
return self._connection._shell.env_prefix(**final_environment)
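The merge above deliberately uses `dict.update()` rather than a deep merge, so entries from later environment dicts simply override earlier ones. A hypothetical standalone version (`merge_environments` is not an Ansible name):

```python
# Sketch of the ordered merge performed above: plain dict.update() is used
# deliberately, so later environment dicts win over earlier ones without
# any merging of sub-dicts. merge_environments is a hypothetical helper.
def merge_environments(environments):
    final = {}
    for env in environments:
        if not env:  # skip None or empty mappings, as the loop above does
            continue
        final.update(env)
    return final

merged = merge_environments([{'PATH': '/usr/bin', 'LANG': 'C'},
                             None,
                             {'LANG': 'en_US.UTF-8'}])
# LANG from the later dict wins; PATH survives from the first.
```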
def _early_needs_tmp_path(self):
'''
Determines if a tmp path should be created before the action is executed.
'''
return getattr(self, 'TRANSFERS_FILES', False)
def _is_pipelining_enabled(self, module_style, wrap_async=False):
'''
Determines if we are required and can do pipelining
'''
# any of these require a true
for condition in [
self._connection.has_pipelining,
self._play_context.pipelining or self._connection.always_pipeline_modules, # pipelining enabled for play or connection requires it (eg winrm)
module_style == "new", # old style modules do not support pipelining
not C.DEFAULT_KEEP_REMOTE_FILES, # user wants remote files
not wrap_async or self._connection.always_pipeline_modules, # async does not normally support pipelining unless it does (eg winrm)
(self._connection.become.name if self._connection.become else '') != 'su', # su does not work with pipelining,
# FIXME: we might need to make become_method exclusion a configurable list
]:
if not condition:
return False
return True
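The condition loop above expresses "every prerequisite must hold", which is the behaviour of the built-in `all()`. A minimal restatement (`pipelining_possible` is an illustrative stand-in, not Ansible code):

```python
# The loop above implements "every prerequisite must hold", i.e. the
# semantics of the built-in all(). pipelining_possible is illustrative.
def pipelining_possible(conditions):
    for condition in conditions:
        if not condition:
            return False
    return True
```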
def _get_admin_users(self):
'''
Returns a list of admin users that are configured for the current shell
plugin
'''
return self.get_shell_option('admin_users', ['root'])
def _get_remote_user(self):
''' consistently get the 'remote_user' for the action plugin '''
# TODO: use 'current user running ansible' as fallback when moving away from play_context
# pwd.getpwuid(os.getuid()).pw_name
remote_user = None
try:
remote_user = self._connection.get_option('remote_user')
except KeyError:
# plugin does not have remote_user option, fall back to default and/or play_context
remote_user = getattr(self._connection, 'default_user', None) or self._play_context.remote_user
except AttributeError:
# plugin does not use config system, fallback to old play_context
remote_user = self._play_context.remote_user
return remote_user
def _is_become_unprivileged(self):
'''
The user is not the same as the connection user and is not part of the
shell configured admin users
'''
# if we don't use become then we know we aren't switching to a
# different unprivileged user
if not self._connection.become:
return False
# if we use become and the user is not an admin (or same user) then
# we need to return become_unprivileged as True
admin_users = self._get_admin_users()
remote_user = self._get_remote_user()
become_user = self.get_become_option('become_user')
return bool(become_user and become_user not in admin_users + [remote_user])
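The return expression above treats become as unprivileged only when a become user is set and it is neither an admin user nor the connection user. A standalone re-statement of that membership check (`is_become_unprivileged` here is a hypothetical free function, not the method itself):

```python
# Sketch of the check above: become is "unprivileged" only when a become
# user is set and it is neither an admin nor the connection user.
# This free function is an illustrative re-statement, not Ansible API.
def is_become_unprivileged(become_user, admin_users, remote_user):
    return bool(become_user and become_user not in admin_users + [remote_user])
```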
def _make_tmp_path(self, remote_user=None):
'''
Create and return a temporary path on a remote box.
'''
become_unprivileged = self._is_become_unprivileged()
remote_tmp = self.get_shell_option('remote_tmp', default='~/.ansible/tmp')
# deal with tmpdir creation
basefile = 'ansible-tmp-%s-%s' % (time.time(), random.randint(0, 2**48))
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
if getattr(self._connection, '_remote_is_local', False):
tmpdir = C.DEFAULT_LOCAL_TMP
else:
tmpdir = self._remote_expand_user(remote_tmp, sudoable=False)
cmd = self._connection._shell.mkdtemp(basefile=basefile, system=become_unprivileged, tmpdir=tmpdir)
result = self._low_level_execute_command(cmd, sudoable=False)
# error handling on this seems a little aggressive?
if result['rc'] != 0:
if result['rc'] == 5:
output = 'Authentication failure.'
elif result['rc'] == 255 and self._connection.transport in ('ssh',):
if self._play_context.verbosity > 3:
output = u'SSH encountered an unknown error. The output was:\n%s%s' % (result['stdout'], result['stderr'])
else:
output = (u'SSH encountered an unknown error during the connection. '
'We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue')
elif u'No space left on device' in result['stderr']:
output = result['stderr']
else:
output = ('Authentication or permission failure. '
'In some cases, you may have been able to authenticate and did not have permissions on the target directory. '
'Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp". '
'Failed command was: %s, exited with result %d' % (cmd, result['rc']))
if 'stdout' in result and result['stdout'] != u'':
output = output + u", stdout output: %s" % result['stdout']
if self._play_context.verbosity > 3 and 'stderr' in result and result['stderr'] != u'':
output += u", stderr output: %s" % result['stderr']
raise AnsibleConnectionFailure(output)
else:
self._cleanup_remote_tmp = True
try:
stdout_parts = result['stdout'].strip().split('%s=' % basefile, 1)
rc = self._connection._shell.join_path(stdout_parts[-1], u'').splitlines()[-1]
except IndexError:
# stdout was empty or just space, set to / to trigger error in next if
rc = '/'
# Catch failure conditions, files should never be
# written to locations in /.
if rc == '/':
raise AnsibleError('failed to resolve remote temporary directory from %s: `%s` returned empty string' % (basefile, cmd))
self._connection._shell.tmpdir = rc
return rc
def _should_remove_tmp_path(self, tmp_path):
'''Determine if temporary path should be deleted or kept by user request/config'''
return tmp_path and self._cleanup_remote_tmp and not C.DEFAULT_KEEP_REMOTE_FILES and "-tmp-" in tmp_path
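The guard above only deletes a path that looks like one of Ansible's own temporary directories (it must contain the `-tmp-` marker) and only when cleanup is actually wanted. A hypothetical standalone version:

```python
# The guard above deletes a remote path only when it carries the "-tmp-"
# marker of our own temp dirs and cleanup is enabled; this protects
# against removing arbitrary directories. Illustrative sketch only.
def should_remove_tmp_path(tmp_path, cleanup_remote_tmp, keep_remote_files):
    return bool(tmp_path and cleanup_remote_tmp
                and not keep_remote_files and '-tmp-' in tmp_path)
```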
def _remove_tmp_path(self, tmp_path):
'''Remove a temporary path we created. '''
if tmp_path is None and self._connection._shell.tmpdir:
tmp_path = self._connection._shell.tmpdir
if self._should_remove_tmp_path(tmp_path):
cmd = self._connection._shell.remove(tmp_path, recurse=True)
# If we have gotten here we have a working ssh configuration.
# If ssh breaks we could leave tmp directories out on the remote system.
tmp_rm_res = self._low_level_execute_command(cmd, sudoable=False)
if tmp_rm_res.get('rc', 0) != 0:
display.warning('Error deleting remote temporary files (rc: %s, stderr: %s)'
% (tmp_rm_res.get('rc'), tmp_rm_res.get('stderr', 'No error string available.')))
else:
self._connection._shell.tmpdir = None
def _transfer_file(self, local_path, remote_path):
"""
Copy a file from the controller to a remote path
:arg local_path: Path on controller to transfer
:arg remote_path: Path on the remote system to transfer into
.. warning::
* When you use this function you likely want to use fixup_perms2() on the
remote_path to make sure that the remote file is readable when the user becomes
a non-privileged user.
* If you use fixup_perms2() on the file and copy or move the file into place, you will
need to then remove filesystem acls on the file once it has been copied into place by
the module. See how the copy module implements this for help.
"""
self._connection.put_file(local_path, remote_path)
return remote_path
def _transfer_data(self, remote_path, data):
'''
Copies the module data out to the temporary module path.
'''
if isinstance(data, dict):
data = jsonify(data)
afd, afile = tempfile.mkstemp(dir=C.DEFAULT_LOCAL_TMP)
afo = os.fdopen(afd, 'wb')
try:
data = to_bytes(data, errors='surrogate_or_strict')
afo.write(data)
except Exception as e:
raise AnsibleError("failure writing module data to temporary file for transfer: %s" % to_native(e))
afo.flush()
afo.close()
try:
self._transfer_file(afile, remote_path)
finally:
os.unlink(afile)
return remote_path
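`_transfer_data` follows a stage-then-clean-up pattern: serialize dicts, write the bytes to a local `mkstemp` file, hand the path to the transfer step, and always unlink the local copy. A self-contained sketch, where `transfer` is a stand-in callable for the real `put_file`:

```python
# Sketch of _transfer_data's pattern: serialize dicts to JSON, stage the
# bytes in a local mkstemp file, hand the path to a transfer callable
# (standing in for Connection.put_file), and always unlink the local copy.
import json
import os
import tempfile

def write_and_transfer(data, transfer):
    if isinstance(data, dict):
        data = json.dumps(data)
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, 'wb') as f:
            f.write(data.encode('utf-8'))
        transfer(path)
    finally:
        os.unlink(path)  # the staging file never outlives the call

paths_seen = []
write_and_transfer({'answer': 42}, paths_seen.append)
```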
def _fixup_perms2(self, remote_paths, remote_user=None, execute=True):
"""
We need the files we upload to be readable (and sometimes executable)
by the user being sudo'd to but we want to limit other people's access
(because the files could contain passwords or other private
information). We achieve this in one of these ways:
* If no sudo is performed or the remote_user is sudo'ing to
themselves, we don't have to change permissions.
* If the remote_user sudo's to a privileged user (for instance, root),
we don't have to change permissions.
* If the remote_user sudo's to an unprivileged user then we attempt to
grant the unprivileged user access via file system acls.
* If granting file system acls fails we try to change the owner of the
file with chown which only works in case the remote_user is
privileged or the remote systems allows chown calls by unprivileged
users (e.g. HP-UX)
* If the chown fails we can set the file to be world readable so that
the second unprivileged user can read the file.
Since this could allow other users to get access to private
information, we only do this if ansible is configured with
"allow_world_readable_tmpfiles" in the ansible.cfg
"""
if remote_user is None:
remote_user = self._get_remote_user()
if getattr(self._connection._shell, "_IS_WINDOWS", False):
# This won't work on Powershell as-is, so we'll just completely skip until
# we have a need for it, at which point we'll have to do something different.
return remote_paths
if self._is_become_unprivileged():
# Unprivileged user that's different than the ssh user. Let's get
# to work!
# Try to use file system acls to make the files readable for sudo'd
# user
if execute:
chmod_mode = 'rx'
setfacl_mode = 'r-x'
else:
chmod_mode = 'rX'
# NOTE: this form fails silently on FreeBSD. We currently
# never call _fixup_perms2() with execute=False but if we
# start to we'll have to fix this.
setfacl_mode = 'r-X'
res = self._remote_set_user_facl(remote_paths, self.get_become_option('become_user'), setfacl_mode)
if res['rc'] != 0:
# File system acls failed; let's try to use chown next
# Set executable bit first as on some systems an
# unprivileged user can use chown
if execute:
res = self._remote_chmod(remote_paths, 'u+x')
if res['rc'] != 0:
raise AnsibleError('Failed to set file mode on remote temporary files (rc: {0}, err: {1})'.format(res['rc'], to_native(res['stderr'])))
res = self._remote_chown(remote_paths, self.get_become_option('become_user'))
if res['rc'] != 0 and remote_user in self._get_admin_users():
# chown failed even if remote_user is administrator/root
raise AnsibleError('Failed to change ownership of the temporary files Ansible needs to create despite connecting as a privileged user. '
'Unprivileged become user would be unable to read the file.')
elif res['rc'] != 0:
if C.ALLOW_WORLD_READABLE_TMPFILES:
# chown and fs acls failed -- do things this insecure
# way only if the user opted in in the config file
display.warning('Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. '
'This may be insecure. For information on securing this, see '
'https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user')
res = self._remote_chmod(remote_paths, 'a+%s' % chmod_mode)
if res['rc'] != 0:
raise AnsibleError('Failed to set file mode on remote files (rc: {0}, err: {1})'.format(res['rc'], to_native(res['stderr'])))
else:
raise AnsibleError('Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user '
'(rc: %s, err: %s). For information on working around this, see '
'https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user'
% (res['rc'], to_native(res['stderr'])))
elif execute:
# Can't depend on the file being transferred with execute permissions.
# Only need user perms because no become was used here
res = self._remote_chmod(remote_paths, 'u+x')
if res['rc'] != 0:
raise AnsibleError('Failed to set execute bit on remote files (rc: {0}, err: {1})'.format(res['rc'], to_native(res['stderr'])))
return remote_paths
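The escalation order described in the `_fixup_perms2` docstring above can be sketched as a small standalone plan (a hypothetical helper for illustration only; not part of Ansible's API):

```python
def perm_fallback_plan(execute=True):
    # Order of attempts made for an unprivileged become user, per the
    # docstring above. 'rX' grants execute only on directories or files
    # that already have it somewhere; 'rx' always grants it.
    chmod_mode = 'rx' if execute else 'rX'
    return [
        'setfacl (file system ACLs)',
        'chown to the become user',
        'chmod a+%s (world-readable, opt-in only)' % chmod_mode,
    ]
```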
def _remote_chmod(self, paths, mode, sudoable=False):
'''
Issue a remote chmod command
'''
cmd = self._connection._shell.chmod(paths, mode)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_chown(self, paths, user, sudoable=False):
'''
Issue a remote chown command
'''
cmd = self._connection._shell.chown(paths, user)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _remote_set_user_facl(self, paths, user, mode, sudoable=False):
'''
Issue a remote call to setfacl
'''
cmd = self._connection._shell.set_user_facl(paths, user, mode)
res = self._low_level_execute_command(cmd, sudoable=sudoable)
return res
def _execute_remote_stat(self, path, all_vars, follow, tmp=None, checksum=True):
'''
Get information from remote file.
'''
if tmp is not None:
display.warning('_execute_remote_stat no longer honors the tmp parameter. Action'
' plugins should set self._connection._shell.tmpdir to share'
' the tmpdir')
del tmp # No longer used
module_args = dict(
path=path,
follow=follow,
get_checksum=checksum,
checksum_algorithm='sha1',
)
mystat = self._execute_module(module_name='stat', module_args=module_args, task_vars=all_vars,
wrap_async=False)
if mystat.get('failed'):
msg = mystat.get('module_stderr')
if not msg:
msg = mystat.get('module_stdout')
if not msg:
msg = mystat.get('msg')
raise AnsibleError('Failed to get information on remote file (%s): %s' % (path, msg))
if not mystat['stat']['exists']:
# empty might be matched, 1 should never match, also backwards compatible
mystat['stat']['checksum'] = '1'
# happens sometimes when it is a dir and not on bsd
if 'checksum' not in mystat['stat']:
mystat['stat']['checksum'] = ''
elif not isinstance(mystat['stat']['checksum'], string_types):
raise AnsibleError("Invalid checksum returned by stat: expected a string type but got %s" % type(mystat['stat']['checksum']))
return mystat['stat']
def _remote_checksum(self, path, all_vars, follow=False):
'''
Produces a remote checksum given a path.
Returns a number 0-5 (as a string) for specific errors instead of a checksum,
and ensures the value can never collide with a real checksum:
0 = unknown error
1 = file does not exist, this might not be an error
2 = permissions issue
3 = it's a directory, not a file
4 = stat module failed, likely due to not finding python
5 = appropriate json module not found
'''
x = "0" # unknown error has occurred
try:
remote_stat = self._execute_remote_stat(path, all_vars, follow=follow)
if remote_stat['exists'] and remote_stat['isdir']:
x = "3" # it's a directory, not a file
else:
x = remote_stat['checksum'] # if 1, file is missing
except AnsibleError as e:
errormsg = to_text(e)
if errormsg.endswith(u'Permission denied'):
x = "2" # cannot read file
elif errormsg.endswith(u'MODULE FAILURE'):
x = "4" # python not found or module uncaught exception
elif 'json' in errormsg:
x = "5" # json module needed
finally:
return x # pylint: disable=lost-exception
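A hypothetical helper showing how a caller might interpret the sentinel strings returned above (the mapping mirrors the docstring; the names are illustrative, not Ansible API):

```python
CHECKSUM_ERRORS = {
    "0": "unknown error",
    "1": "file does not exist (may not be an error)",
    "2": "permission denied",
    "3": "path is a directory, not a file",
    "4": "stat module failed, likely no python on the remote",
    "5": "no appropriate json module found",
}

def describe_checksum(result):
    # Real checksums are 40-char sha1 hex strings, so single-character
    # results can safely be treated as error codes.
    return CHECKSUM_ERRORS.get(result, "sha1:%s" % result)
```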
def _remote_expand_user(self, path, sudoable=True, pathsep=None):
''' takes a remote path and performs tilde/$HOME expansion on the remote host '''
# We only expand ~/path and ~username/path
if not path.startswith('~'):
return path
# Per Jborean, we don't have to worry about Windows as we don't have a notion of user's home
# dir there.
split_path = path.split(os.path.sep, 1)
expand_path = split_path[0]
if expand_path == '~':
# Network connection plugins (network_cli, netconf, etc.) execute on the controller, rather than the remote host.
# As such, we want to avoid using remote_user for paths as remote_user may not line up with the local user
# This is a hack and should be solved by more intelligent handling of remote_tmp in 2.7
become_user = self.get_become_option('become_user')
if getattr(self._connection, '_remote_is_local', False):
pass
elif sudoable and self._connection.become and become_user:
expand_path = '~%s' % become_user
else:
# use remote user instead, if none set default to current user
expand_path = '~%s' % (self._get_remote_user() or '')
# use shell to construct appropriate command and execute
cmd = self._connection._shell.expand_user(expand_path)
data = self._low_level_execute_command(cmd, sudoable=False)
try:
initial_fragment = data['stdout'].strip().splitlines()[-1]
except IndexError:
initial_fragment = None
if not initial_fragment:
# Something went wrong trying to expand the path remotely. Try using pwd, if not, return
# the original string
cmd = self._connection._shell.pwd()
pwd = self._low_level_execute_command(cmd, sudoable=False).get('stdout', '').strip()
if pwd:
expanded = pwd
else:
expanded = path
elif len(split_path) > 1:
expanded = self._connection._shell.join_path(initial_fragment, *split_path[1:])
else:
expanded = initial_fragment
if '..' in os.path.dirname(expanded).split('/'):
raise AnsibleError("'%s' returned an invalid relative home directory path containing '..'" % self._play_context.remote_addr)
return expanded
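The tilde split used in `_remote_expand_user` above can be illustrated in isolation (hypothetical helper, assuming a POSIX path separator):

```python
import os

def split_tilde(path):
    # '~user/rest' -> ('~user', 'rest'); a bare '~' or '~user' has no
    # trailing fragment to re-join after expansion.
    split_path = path.split(os.path.sep, 1)
    head = split_path[0]
    rest = split_path[1] if len(split_path) > 1 else None
    return head, rest
```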
def _strip_success_message(self, data):
'''
Removes the BECOME-SUCCESS message from the data.
'''
if data.strip().startswith('BECOME-SUCCESS-'):
data = re.sub(r'^((\r)?\n)?BECOME-SUCCESS.*(\r)?\n', '', data)
return data
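The stripping above can be exercised on its own (a standalone copy of the same logic, for illustration):

```python
import re

def strip_success_message(data):
    # Drop a leading 'BECOME-SUCCESS-<id>' marker line that privilege
    # escalation plugins prepend to stdout.
    if data.strip().startswith('BECOME-SUCCESS-'):
        data = re.sub(r'^((\r)?\n)?BECOME-SUCCESS.*(\r)?\n', '', data)
    return data
```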
def _update_module_args(self, module_name, module_args, task_vars):
# set check mode in the module arguments, if required
if self._play_context.check_mode:
if not self._supports_check_mode:
raise AnsibleError("check mode is not supported for this operation")
module_args['_ansible_check_mode'] = True
else:
module_args['_ansible_check_mode'] = False
# set no log in the module arguments, if required
module_args['_ansible_no_log'] = self._play_context.no_log or C.DEFAULT_NO_TARGET_SYSLOG
# set debug in the module arguments, if required
module_args['_ansible_debug'] = C.DEFAULT_DEBUG
# let module know we are in diff mode
module_args['_ansible_diff'] = self._play_context.diff
# let module know our verbosity
module_args['_ansible_verbosity'] = display.verbosity
# give the module information about the ansible version
module_args['_ansible_version'] = __version__
# give the module information about its name
module_args['_ansible_module_name'] = module_name
# set the syslog facility to be used in the module
module_args['_ansible_syslog_facility'] = task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY)
# let module know about filesystems that selinux treats specially
module_args['_ansible_selinux_special_fs'] = C.DEFAULT_SELINUX_SPECIAL_FS
# what to do when parameter values are converted to strings
module_args['_ansible_string_conversion_action'] = C.STRING_CONVERSION_ACTION
# give the module the socket for persistent connections
module_args['_ansible_socket'] = getattr(self._connection, 'socket_path')
if not module_args['_ansible_socket']:
module_args['_ansible_socket'] = task_vars.get('ansible_socket')
# make sure all commands use the designated shell executable
module_args['_ansible_shell_executable'] = self._play_context.executable
# make sure modules are aware if they need to keep the remote files
module_args['_ansible_keep_remote_files'] = C.DEFAULT_KEEP_REMOTE_FILES
# make sure all commands use the designated temporary directory if created
if self._is_become_unprivileged(): # force fallback on remote_tmp as user cannot normally write to dir
module_args['_ansible_tmpdir'] = None
else:
module_args['_ansible_tmpdir'] = self._connection._shell.tmpdir
# make sure the remote_tmp value is sent through in case modules needs to create their own
module_args['_ansible_remote_tmp'] = self.get_shell_option('remote_tmp', default='~/.ansible/tmp')
def _execute_module(self, module_name=None, module_args=None, tmp=None, task_vars=None, persist_files=False, delete_remote_tmp=None, wrap_async=False):
'''
Transfer and run a module along with its arguments.
'''
if tmp is not None:
display.warning('_execute_module no longer honors the tmp parameter. Action plugins'
' should set self._connection._shell.tmpdir to share the tmpdir')
del tmp # No longer used
if delete_remote_tmp is not None:
display.warning('_execute_module no longer honors the delete_remote_tmp parameter.'
' Action plugins should check self._connection._shell.tmpdir to'
' see if a tmpdir existed before they were called to determine'
' if they are responsible for removing it.')
del delete_remote_tmp # No longer used
tmpdir = self._connection._shell.tmpdir
# We set the module_style to new here so the remote_tmp is created
# before the module args are built if remote_tmp is needed (async).
# If the module_style turns out to not be new and we didn't create the
# remote tmp here, it will still be created. This must be done before
# calling self._update_module_args() so the module wrapper has the
# correct remote_tmp value set
if not self._is_pipelining_enabled("new", wrap_async) and tmpdir is None:
self._make_tmp_path()
tmpdir = self._connection._shell.tmpdir
if task_vars is None:
task_vars = dict()
# if a module name was not specified for this execution, use the action from the task
if module_name is None:
module_name = self._task.action
if module_args is None:
module_args = self._task.args
self._update_module_args(module_name, module_args, task_vars)
# FIXME: convert async_wrapper.py to not rely on environment variables
# make sure we get the right async_dir variable, backwards compatibility
# means we need to lookup the env value ANSIBLE_ASYNC_DIR first
remove_async_dir = None
if wrap_async or self._task.async_val:
env_async_dir = [e for e in self._task.environment if
"ANSIBLE_ASYNC_DIR" in e]
if len(env_async_dir) > 0:
msg = "Setting the async dir from the environment keyword " \
"ANSIBLE_ASYNC_DIR is deprecated. Set the async_dir " \
"shell option instead"
self._display.deprecated(msg, "2.12")
else:
# ANSIBLE_ASYNC_DIR is not set on the task, we get the value
# from the shell option and temporarily add to the environment
# list for async_wrapper to pick up
async_dir = self.get_shell_option('async_dir', default="~/.ansible_async")
remove_async_dir = len(self._task.environment)
self._task.environment.append({"ANSIBLE_ASYNC_DIR": async_dir})
# FUTURE: refactor this along with module build process to better encapsulate "smart wrapper" functionality
(module_style, shebang, module_data, module_path) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
display.vvv("Using module file %s" % module_path)
if not shebang and module_style != 'binary':
raise AnsibleError("module (%s) is missing interpreter line" % module_name)
self._used_interpreter = shebang
remote_module_path = None
if not self._is_pipelining_enabled(module_style, wrap_async):
# we might need remote tmp dir
if tmpdir is None:
self._make_tmp_path()
tmpdir = self._connection._shell.tmpdir
remote_module_filename = self._connection._shell.get_remote_filename(module_path)
remote_module_path = self._connection._shell.join_path(tmpdir, 'AnsiballZ_%s' % remote_module_filename)
args_file_path = None
if module_style in ('old', 'non_native_want_json', 'binary'):
# we'll also need a tmp file to hold our module arguments
args_file_path = self._connection._shell.join_path(tmpdir, 'args')
if remote_module_path or module_style != 'new':
display.debug("transferring module to remote %s" % remote_module_path)
if module_style == 'binary':
self._transfer_file(module_path, remote_module_path)
else:
self._transfer_data(remote_module_path, module_data)
if module_style == 'old':
# we need to dump the module args to a k=v string in a file on
# the remote system, which can be read and parsed by the module
args_data = ""
for k, v in iteritems(module_args):
args_data += '%s=%s ' % (k, shlex_quote(text_type(v)))
self._transfer_data(args_file_path, args_data)
elif module_style in ('non_native_want_json', 'binary'):
self._transfer_data(args_file_path, json.dumps(module_args))
display.debug("done transferring module to remote")
environment_string = self._compute_environment_string()
# remove the ANSIBLE_ASYNC_DIR env entry if we added a temporary one for
# the async_wrapper task - this is so the async_status plugin doesn't
# fire a deprecation warning when it runs after this task
if remove_async_dir is not None:
del self._task.environment[remove_async_dir]
remote_files = []
if tmpdir and remote_module_path:
remote_files = [tmpdir, remote_module_path]
if args_file_path:
remote_files.append(args_file_path)
sudoable = True
in_data = None
cmd = ""
if wrap_async and not self._connection.always_pipeline_modules:
# configure, upload, and chmod the async_wrapper module
(async_module_style, shebang, async_module_data, async_module_path) = self._configure_module(module_name='async_wrapper', module_args=dict(),
task_vars=task_vars)
async_module_remote_filename = self._connection._shell.get_remote_filename(async_module_path)
remote_async_module_path = self._connection._shell.join_path(tmpdir, async_module_remote_filename)
self._transfer_data(remote_async_module_path, async_module_data)
remote_files.append(remote_async_module_path)
async_limit = self._task.async_val
async_jid = str(random.randint(0, 999999999999))
# call the interpreter for async_wrapper directly
# this permits use of a script for an interpreter on non-Linux platforms
# TODO: re-implement async_wrapper as a regular module to avoid this special case
interpreter = shebang.replace('#!', '').strip()
async_cmd = [interpreter, remote_async_module_path, async_jid, async_limit, remote_module_path]
if environment_string:
async_cmd.insert(0, environment_string)
if args_file_path:
async_cmd.append(args_file_path)
else:
# maintain a fixed number of positional parameters for async_wrapper
async_cmd.append('_')
if not self._should_remove_tmp_path(tmpdir):
async_cmd.append("-preserve_tmp")
cmd = " ".join(to_text(x) for x in async_cmd)
else:
if self._is_pipelining_enabled(module_style):
in_data = module_data
display.vvv("Pipelining is enabled.")
else:
cmd = remote_module_path
cmd = self._connection._shell.build_module_command(environment_string, shebang, cmd, arg_path=args_file_path).strip()
# Fix permissions of the tmpdir path and tmpdir files. This should be called after all
# files have been transferred.
if remote_files:
# remove none/empty
remote_files = [x for x in remote_files if x]
self._fixup_perms2(remote_files, self._get_remote_user())
# actually execute
res = self._low_level_execute_command(cmd, sudoable=sudoable, in_data=in_data)
# parse the main result
data = self._parse_returned_data(res)
# NOTE: INTERNAL KEYS ONLY ACCESSIBLE HERE
# get internal info before cleaning
if data.pop("_ansible_suppress_tmpdir_delete", False):
self._cleanup_remote_tmp = False
# NOTE: yum returns results .. but that made it 'compatible' with squashing, so we allow mappings, for now
if 'results' in data and (not isinstance(data['results'], Sequence) or isinstance(data['results'], string_types)):
data['ansible_module_results'] = data['results']
del data['results']
display.warning("Found internal 'results' key in module return, renamed to 'ansible_module_results'.")
# remove internal keys
remove_internal_keys(data)
if wrap_async:
# async_wrapper will clean up its tmpdir on its own so we want the controller side to
# forget about it now
self._connection._shell.tmpdir = None
# FIXME: for backwards compat, figure out if still makes sense
data['changed'] = True
# pre-split stdout/stderr into lines if needed
if 'stdout' in data and 'stdout_lines' not in data:
# if the value is 'False', a default won't catch it.
txt = data.get('stdout', None) or u''
data['stdout_lines'] = txt.splitlines()
if 'stderr' in data and 'stderr_lines' not in data:
# if the value is 'False', a default won't catch it.
txt = data.get('stderr', None) or u''
data['stderr_lines'] = txt.splitlines()
# propagate interpreter discovery results back to the controller
if self._discovered_interpreter_key:
if data.get('ansible_facts') is None:
data['ansible_facts'] = {}
data['ansible_facts'][self._discovered_interpreter_key] = self._discovered_interpreter
if self._discovery_warnings:
if data.get('warnings') is None:
data['warnings'] = []
data['warnings'].extend(self._discovery_warnings)
if self._discovery_deprecation_warnings:
if data.get('deprecations') is None:
data['deprecations'] = []
data['deprecations'].extend(self._discovery_deprecation_warnings)
# mark the entire module results untrusted as a template right here, since the current action could
# possibly template one of these values.
data = wrap_var(data)
display.debug("done with _execute_module (%s, %s)" % (module_name, module_args))
return data
def _parse_returned_data(self, res):
try:
filtered_output, warnings = _filter_non_json_lines(res.get('stdout', u''))
for w in warnings:
display.warning(w)
data = json.loads(filtered_output)
data['_ansible_parsed'] = True
except ValueError:
# not valid json, lets try to capture error
data = dict(failed=True, _ansible_parsed=False)
data['module_stdout'] = res.get('stdout', u'')
if 'stderr' in res:
data['module_stderr'] = res['stderr']
if res['stderr'].startswith(u'Traceback'):
data['exception'] = res['stderr']
# in some cases a traceback will arrive on stdout instead of stderr, such as when using ssh with -tt
if 'exception' not in data and data['module_stdout'].startswith(u'Traceback'):
data['exception'] = data['module_stdout']
# The default
data['msg'] = "MODULE FAILURE"
# try to figure out if we are missing interpreter
if self._used_interpreter is not None:
match = re.compile('%s: (?:No such file or directory|not found)' % self._used_interpreter.lstrip('!#'))
if match.search(data['module_stderr']) or match.search(data['module_stdout']):
data['msg'] = "The module failed to execute correctly, you probably need to set the interpreter."
# always append hint
data['msg'] += '\nSee stdout/stderr for the exact error'
if 'rc' in res:
data['rc'] = res['rc']
return data
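The JSON-or-failure fallback above reduces to roughly the following (a simplified sketch; the real method also filters non-JSON warning lines and detects tracebacks):

```python
import json

def parse_module_output(stdout, stderr=""):
    # Valid JSON is parsed as the module result; anything else becomes
    # a generic MODULE FAILURE result carrying the raw output.
    try:
        data = json.loads(stdout)
        data['_ansible_parsed'] = True
    except ValueError:
        data = {
            'failed': True,
            '_ansible_parsed': False,
            'module_stdout': stdout,
            'module_stderr': stderr,
            'msg': 'MODULE FAILURE',
        }
    return data
```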
# FIXME: move to connection base
def _low_level_execute_command(self, cmd, sudoable=True, in_data=None, executable=None, encoding_errors='surrogate_then_replace', chdir=None):
'''
This is the function which executes the low level shell command, which
may be commands to create/remove directories for temporary files, or to
run the module code or python directly when pipelining.
:kwarg encoding_errors: If the value returned by the command isn't
utf-8 then we have to figure out how to transform it to unicode.
If the value is just going to be displayed to the user (or
discarded) then the default of 'replace' is fine. If the data is
used as a key or is going to be written back out to a file
verbatim, then this won't work. May have to use some sort of
replacement strategy (python3 could use surrogateescape)
:kwarg chdir: cd into this directory before executing the command.
'''
display.debug("_low_level_execute_command(): starting")
# if not cmd:
# # this can happen with powershell modules when there is no analog to a Windows command (like chmod)
# display.debug("_low_level_execute_command(): no command, exiting")
# return dict(stdout='', stderr='', rc=254)
if chdir:
display.debug("_low_level_execute_command(): changing cwd to %s for this command" % chdir)
cmd = self._connection._shell.append_command('cd %s' % chdir, cmd)
ruser = self._get_remote_user()
buser = self.get_become_option('become_user')
if (sudoable and self._connection.become and # if sudoable and have become
self._connection.transport != 'network_cli' and # if not using network_cli
(C.BECOME_ALLOW_SAME_USER or (buser != ruser or not any((ruser, buser))))): # if we allow same user PE or users are different and either is set
display.debug("_low_level_execute_command(): using become for this command")
cmd = self._connection.become.build_become_command(cmd, self._connection._shell)
if self._connection.allow_executable:
if executable is None:
executable = self._play_context.executable
# mitigation for SSH race which can drop stdout (https://github.com/ansible/ansible/issues/13876)
# only applied for the default executable to avoid interfering with the raw action
cmd = self._connection._shell.append_command(cmd, 'sleep 0')
if executable:
cmd = executable + ' -c ' + shlex_quote(cmd)
display.debug("_low_level_execute_command(): executing: %s" % (cmd,))
# Change directory to basedir of task for command execution when connection is local
if self._connection.transport == 'local':
self._connection.cwd = to_bytes(self._loader.get_basedir(), errors='surrogate_or_strict')
rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
# stdout and stderr may be either a file-like or a bytes object.
# Convert either one to a text type
if isinstance(stdout, binary_type):
out = to_text(stdout, errors=encoding_errors)
elif not isinstance(stdout, text_type):
out = to_text(b''.join(stdout.readlines()), errors=encoding_errors)
else:
out = stdout
if isinstance(stderr, binary_type):
err = to_text(stderr, errors=encoding_errors)
elif not isinstance(stderr, text_type):
err = to_text(b''.join(stderr.readlines()), errors=encoding_errors)
else:
err = stderr
if rc is None:
rc = 0
# be sure to remove the BECOME-SUCCESS message now
out = self._strip_success_message(out)
display.debug(u"_low_level_execute_command() done: rc=%d, stdout=%s, stderr=%s" % (rc, out, err))
return dict(rc=rc, stdout=out, stdout_lines=out.splitlines(), stderr=err, stderr_lines=err.splitlines())
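The stdout/stderr normalization above (bytes, file-like, or text in; text out) can be shown in isolation (hypothetical helper, assuming UTF-8 with replacement characters):

```python
def to_text_output(stream):
    # Bytes are decoded directly; file-like objects are drained first;
    # text passes through untouched.
    if isinstance(stream, bytes):
        return stream.decode('utf-8', 'replace')
    if hasattr(stream, 'readlines'):
        return b''.join(stream.readlines()).decode('utf-8', 'replace')
    return stream
```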
def _get_diff_data(self, destination, source, task_vars, source_file=True):
# Note: Since we do not diff the source and destination before we transform from bytes into
# text the diff between source and destination may not be accurate. To fix this, we'd need
# to move the diffing from the callback plugins into here.
#
# Example of data which would cause trouble is src_content == b'\xff' and dest_content ==
# b'\xfe'. Neither of those are valid utf-8 so both get turned into the replacement
# character: diff['before'] = u'�' ; diff['after'] = u'�' When the callback plugin later
# diffs before and after it shows an empty diff.
diff = {}
display.debug("Going to peek to see if file has changed permissions")
peek_result = self._execute_module(module_name='file', module_args=dict(path=destination, _diff_peek=True), task_vars=task_vars, persist_files=True)
if not peek_result.get('failed', False) or peek_result.get('rc', 0) == 0:
if peek_result.get('state') in (None, 'absent'):
diff['before'] = u''
elif peek_result.get('appears_binary'):
diff['dst_binary'] = 1
elif peek_result.get('size') and C.MAX_FILE_SIZE_FOR_DIFF > 0 and peek_result['size'] > C.MAX_FILE_SIZE_FOR_DIFF:
diff['dst_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
else:
display.debug(u"Slurping the file %s" % source)
dest_result = self._execute_module(module_name='slurp', module_args=dict(path=destination), task_vars=task_vars, persist_files=True)
if 'content' in dest_result:
dest_contents = dest_result['content']
if dest_result['encoding'] == u'base64':
dest_contents = base64.b64decode(dest_contents)
else:
raise AnsibleError("unknown encoding in content option, failed: %s" % to_native(dest_result))
diff['before_header'] = destination
diff['before'] = to_text(dest_contents)
if source_file:
st = os.stat(source)
if C.MAX_FILE_SIZE_FOR_DIFF > 0 and st[stat.ST_SIZE] > C.MAX_FILE_SIZE_FOR_DIFF:
diff['src_larger'] = C.MAX_FILE_SIZE_FOR_DIFF
else:
display.debug("Reading local copy of the file %s" % source)
try:
with open(source, 'rb') as src:
src_contents = src.read()
except Exception as e:
raise AnsibleError("Unexpected error while reading source (%s) for diff: %s " % (source, to_native(e)))
if b"\x00" in src_contents:
diff['src_binary'] = 1
else:
diff['after_header'] = source
diff['after'] = to_text(src_contents)
else:
display.debug(u"source of file passed in")
diff['after_header'] = u'dynamically generated'
diff['after'] = source
if self._play_context.no_log:
if 'before' in diff:
diff["before"] = u""
if 'after' in diff:
diff["after"] = u" [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]\n"
return diff
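The `no_log` masking at the end of the method above behaves like this standalone sketch (illustrative only):

```python
def mask_diff(diff, no_log):
    # Hide diff contents when the task result must not be logged.
    if no_log:
        if 'before' in diff:
            diff['before'] = u''
        if 'after' in diff:
            diff['after'] = (u" [[ Diff output has been hidden because "
                             u"'no_log: true' was specified for this result ]]\n")
    return diff
```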
def _find_needle(self, dirname, needle):
'''
find a needle in a haystack of paths, optionally using 'dirname' as a subdir.
This will build the ordered list of paths to search and pass them to dwim
to get back the first existing file found.
'''
# dwim already deals with playbook basedirs
path_stack = self._task.get_search_path()
# if missing it will return a file not found exception
return self._loader.path_dwim_relative_stack(path_stack, dirname, needle)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,433 |
Template module shows all rows as difference.
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The template module shows all rows as a difference when `STRING_CONVERSION_ACTION=error`. It seems that the template module loses the `--- before` information.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Template Module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.8.2
config file = /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg
configured module search path = ['/Users/mhimuro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /Users/mhimuro/devel/homebrew/bin/ansible
python version = 3.7.4 (default, Jul 12 2019, 09:36:09) [Clang 10.0.1 (clang-1001.0.46.4)]
$
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
STRING_CONVERSION_ACTION(/Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg) = error
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
target OS versions: CentOS Linux release 7.6.1810 (Core)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. prepare dummy.txt on target os.
```command
# echo hello > /root/dummy.txt
```
2. ansible-playbook
<!--- Paste example playbooks or commands between quotes below -->
```command
$ ansible-playbook -i hosts site.yml --diff --check
PLAY [deploy by template module] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************************
ok: [192.168.0.196]
TASK [template] ****************************************************************************************************************************************************************************************************
--- before
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-6716163hjzxdi/tmp9ta05f8n/dummy.txt
@@ -0,0 +1,2 @@
+hello
+world
changed: [192.168.0.196]
PLAY RECAP *********************************************************************************************************************************************************************************************************
192.168.0.196 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$
```
```yaml
$ cat site.yml
- name: deploy by template module
hosts: all
tasks:
- template:
src: dummy.txt
dest: /root/dummy.txt
```
```dummy.txt
$ cat dummy.txt
hello
world
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Show only modified row.
```
TASK [template] ****************************************************************************************************************************************************************************************************
--- before: /root/dummy.txt
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-67303hkyg6c4n/tmpl13oaaoa/dummy.txt
@@ -1 +1,2 @@
hello
+world
changed: [192.168.0.196]
```
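For reference, the expected hunk can be reproduced with Python's `difflib` (illustrative; the file labels are made up):

```python
import difflib

before = ['hello\n']
after = ['hello\n', 'world\n']
diff = ''.join(difflib.unified_diff(
    before, after,
    fromfile='before: /root/dummy.txt',
    tofile='after: dummy.txt',
))
```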
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook -i hosts site.yml --diff --check -vvvv
ansible-playbook 2.8.2
config file = /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg
configured module search path = ['/Users/mhimuro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /Users/mhimuro/devel/homebrew/bin/ansible-playbook
python version = 3.7.4 (default, Jul 12 2019, 09:36:09) [Clang 10.0.1 (clang-1001.0.46.4)]
Using /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
script declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
auto declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
Parsed /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: site.yml *************************************************************************************************************************************************************************************************
Positional arguments: site.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
check: True
diff: True
inventory: ('/Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts',)
forks: 5
1 plays in site.yml
PLAY [deploy by template module] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************************
task path: /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/site.yml:1
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.0.196> (0, b'/root\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876 `" && echo ansible-tmp-1563877177.7595708-149449123721876="` echo /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876 `" ) && sleep 0'"'"''
<192.168.0.196> (0, b'ansible-tmp-1563877177.7595708-149449123721876=/root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> Attempting python interpreter discovery
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<192.168.0.196> (0, b'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python2.7\n/usr/libexec/platform-python\n/usr/bin/python\nENDFOUND\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<192.168.0.196> (0, b'{"osrelease_content": "NAME=\\"CentOS Linux\\"\\nVERSION=\\"7 (Core)\\"\\nID=\\"centos\\"\\nID_LIKE=\\"rhel fedora\\"\\nVERSION_ID=\\"7\\"\\nPRETTY_NAME=\\"CentOS Linux 7 (Core)\\"\\nANSI_COLOR=\\"0;31\\"\\nCPE_NAME=\\"cpe:/o:centos:centos:7\\"\\nHOME_URL=\\"https://www.centos.org/\\"\\nBUG_REPORT_URL=\\"https://bugs.centos.org/\\"\\n\\nCENTOS_MANTISBT_PROJECT=\\"CentOS-7\\"\\nCENTOS_MANTISBT_PROJECT_VERSION=\\"7\\"\\nREDHAT_SUPPORT_PRODUCT=\\"centos\\"\\nREDHAT_SUPPORT_PRODUCT_VERSION=\\"7\\"\\n\\n", "platform_dist_result": ["centos", "7.6.1810", "Core"]}\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/system/setup.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7 TO /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7 /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:8 O:131072 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:9 O:163840 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:10 O:196608 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:11 O:229376 S:23099\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 32768 bytes at 98304\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 8 32768 bytes at 131072\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 9 32768 bytes at 163840\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 10 32768 bytes at 196608\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 11 23099 bytes at 229376\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/ /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py && sleep 0'"'"''
<192.168.0.196> (0, b'\r\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_fibre_channel_wwn": [], "module_setup": true, "ansible_distribution_version": "7.6", "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "C", "LC_NUMERIC": "C", "TERM": "screen", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/0", "SHLVL": "2", "SSH_TTY": "/dev/pts/0", "PWD": "/root", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "XDG_SESSION_ID": "27910", "SSH_CLIENT": "192.168.0.116 53653 22", "LOGNAME": "root", "USER": "root", "MAIL": "/var/mail/root", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LS_COLORS": "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au
=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:", "HOME": "/root", "LC_ALL": "C", "_": "/usr/bin/python", "SSH_CONNECTION": "192.168.0.116 53653 192.168.0.196 22"}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_vethcb884b7": {"macaddress": "8a:47:31:d5:d3:1e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", 
"tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "vethcb884b7", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8847:31ff:fed5:d31e"}], "active": true, "speed": 10000}, "ansible_default_ipv4": {"macaddress": "1c:87:2c:41:6e:ce", "network": "192.168.0.0", "mtu": 1500, "broadcast": "192.168.0.255", "alias": "enp4s0", "netmask": "255.255.255.0", "address": "192.168.0.196", "interface": "enp4s0", "type": "ether", "gateway": "192.168.0.3"}, "ansible_swapfree_mb": 7654, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "ja_JP.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-957.1.3.el7.x86_64", "quiet": true, "rhgb": true, "rd.lvm.lv": "centos/swap", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/centos-root"}, "ansible_machine_id": "75e09accf1bb49fa8d70b2de021e00fb", "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "BD9ABFA8-FE30-2EE7-41D2-1C872C416ECE", "ansible_pkg_mgr": "yum", "ansible_vethc03bba1": {"macaddress": "1e:41:f1:a6:31:ff", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", 
"netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "vethc03bba1", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1c41:f1ff:fea6:31ff"}], "active": true, "speed": 10000}, "ansible_iscsi_iqn": "", "ansible_veth2726a80": {"macaddress": "4a:02:87:c6:94:14", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", 
"busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "veth2726a80", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4802:87ff:fec6:9414"}], "active": true, "speed": 10000}, "ansible_all_ipv6_addresses": ["fe80::1e87:2cff:fe41:6ece", "fe80::fcb2:74ff:feb8:ed7a", "fe80::1c41:f1ff:fea6:31ff", "fe80::a0e7:6fff:fea1:a0d4", "fe80::4802:87ff:fec6:9414", "fe80::42:20ff:fe23:46cf", "fe80::8847:31ff:fed5:d31e"], "ansible_uptime_seconds": 6684898, "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_hostnqn": "", "ansible_user_shell": "/bin/bash", "ansible_product_serial": "System Serial Number", "ansible_form_factor": "Desktop", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:20:23:46:cf", "features": {"tx_checksum_ipv4": "off [fixed]", 
"generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [requested on]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [requested on]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": ["veth2726a80", "vethc03bba1", "veth5af9b4f", "veth27f581f", "vethcb884b7"], "id": "8000.0242202346cf", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "172.17.255.255", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::42:20ff:fe23:46cf"}], "active": 
true, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "1", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "2", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "3", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "4", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "5", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "6", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "7", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAAdVLDGWn3TP6UxDh2EOhbblOwKh9nc8rDSSYZ33sc9SQIPhmYsGGnP62cC5Fm4uVe14lBF0Thf8IZCMIYpuLY=", "ansible_user_gid": 0, "ansible_system_vendor": "ASUS", "ansible_swaptotal_mb": 7935, "ansible_distribution_major_version": "7", "ansible_real_group_id": 0, "ansible_lsb": {}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDyDmjz8qJ/JvPuvlZi3qiIT1vBBciJiargJHLs8ccywFNcVbJiXj/3fibFoE2VISKLtcYtPvxAzMnKeowdPc5BmmTmdKyyvSMTxmbX25lhb9t0LhkFeIUXbhy+j9Wvj6/d39Yuh2zUbIqI5YR/qpssEUeh2z/eROm/jN0lj1TSnhcYxDAe04GvXGBfDCCz1lDW/rX1/JgBIdRYGUyB57BbeS3FlvFxz7NfzBEdAdr+Dvv/oxTd4aoteqx1+Z8pNVKYkDw1nbjMFcZDF9u/uANvwh3p0qw4Nfve5Sit/zkDdkdC+DkpnnR5W+M2O1o7Iyq90AafS4xCqzYG6MDR+Jv/", "ansible_user_gecos": "root", "ansible_processor_threads_per_core": 2, "ansible_system": "Linux", "ansible_all_ipv4_addresses": ["192.168.0.196", "172.17.0.1"], "ansible_python_version": "2.7.5", "ansible_product_version": "System Version", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 7690, "used": 7491, "free": 199}, "swap": {"cached": 16, "total": 7935, "free": 7654, "used": 281}, "nocache": {"used": 6807, "free": 883}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_virtualization_role": 
"host", "ansible_dns": {"nameservers": ["192.168.0.5", "192.168.0.16"], "search": ["work"]}, "ansible_effective_group_id": 0, "ansible_enp4s0": {"macaddress": "1c:87:2c:41:6e:ce", "features": {"tx_checksum_ipv4": "off", "generic_receive_offload": "on", "tx_checksum_ipv6": "off", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off", "highdma": "on [fixed]", "rx_fcs": "off", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "off", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "off", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "off", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "off", "rx_checksumming": "on", "tx_tcp_segmentation": "off", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "off [requested on]", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "off", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:04:00.0", "module": "r8169", "mtu": 1500, 
"device": "enp4s0", "promisc": false, "timestamping": ["tx_software", "rx_software", "software"], "ipv4": {"broadcast": "192.168.0.255", "netmask": "255.255.255.0", "network": "192.168.0.0", "address": "192.168.0.196"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1e87:2cff:fe41:6ece"}], "active": true, "speed": 1000, "hw_timestamp_filters": []}, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", 
"rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 7690, "ansible_device_links": {"masters": {"loop1": ["dm-3"], "loop0": ["dm-3"], "sda2": ["dm-0", "dm-1", "dm-2"], "dm-3": ["dm-4", "dm-5", "dm-6", "dm-7", "dm-8"]}, "labels": {}, "ids": {"sr0": ["ata-ASUS_DRW-24F1ST_a_S10M68EG2002AW"], "sda2": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part2", "lvm-pv-uuid-Q78tme-t9AP-o3G0-fAs8-27fr-QioB-m5PGHc", "wwn-0x5000039fe0c30158-part2"], "sda": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS", "wwn-0x5000039fe0c30158"], "dm-8": ["dm-name-docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "sda1": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part1", "wwn-0x5000039fe0c30158-part1"], "dm-6": ["dm-name-docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28"], "dm-7": ["dm-name-docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf"], "dm-4": ["dm-name-docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410"], "dm-5": ["dm-name-docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff"], "dm-2": ["dm-name-centos-home", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSMaaSDIGwxchQRoUyL1ntnMPT6KOAriTU"], "dm-0": ["dm-name-centos-root", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSUq3u89edCoYmtN3lATh7xy5GMZr5Pgo7"], "dm-1": ["dm-name-centos-swap", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnS5bxciorzpAwg9QYL7sMS1PoWUcb0IiXV"]}, "uuids": {"sda1": ["7d6d535a-6728-4c83-8ab2-40cb45b64e7d"], "dm-2": ["80fe1d0c-c3c4-4442-a467-f2975fd87ba5"], 
"dm-0": ["ac012a2a-a7f8-425b-911a-9197e611fbfe"], "dm-8": ["48eda381-df74-4ad8-a63a-46c167bf1144"], "dm-1": ["13900741-5a75-44f6-8848-3325135493d0"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_proc_cmdline": {"LANG": "ja_JP.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-957.1.3.el7.x86_64", "quiet": true, "rhgb": true, "rd.lvm.lv": ["centos/root", "centos/swap"], "crashkernel": "auto", "ro": true, "root": "/dev/mapper/centos-root"}, "ansible_memfree_mb": 199, "ansible_processor_count": 1, "ansible_hostname": "intra", "ansible_interfaces": ["veth2726a80", "vethcb884b7", "docker0", "lo", "enp4s0", "vethc03bba1", "veth5af9b4f", "veth27f581f"], "ansible_selinux": {"status": "disabled"}, "ansible_fqdn": "ec2-52-213-25-113.eu-west-1.compute.amazonaws.com", "ansible_mounts": [{"block_used": 8256, "uuid": "80fe1d0c-c3c4-4442-a467-f2975fd87ba5", "size_total": 437290033152, "block_total": 106760262, "mount": "/home", "block_available": 106752006, "size_available": 437256216576, "fstype": "xfs", "inode_total": 427249664, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/mapper/centos-home", "inode_used": 3, "block_size": 4096, "inode_available": 427249661}, {"block_used": 74602, "uuid": "7d6d535a-6728-4c83-8ab2-40cb45b64e7d", "size_total": 517713920, "block_total": 126395, "mount": "/boot", "block_available": 51793, "size_available": 212144128, "fstype": "xfs", "inode_total": 512000, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/sda1", "inode_used": 361, "block_size": 4096, "inode_available": 511639}, {"block_used": 10291, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "block_available": 2608333, "size_available": 10683731968, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": 
"/dev/mapper/docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "inode_used": 519, "block_size": 4096, "inode_available": 10484217}, {"block_used": 7070823, "uuid": "ac012a2a-a7f8-425b-911a-9197e611fbfe", "size_total": 53660876800, "block_total": 13100800, "mount": "/", "block_available": 6029977, "size_available": 24698785792, "fstype": "xfs", "inode_total": 52428800, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/mapper/centos-root", "inode_used": 146375, "block_size": 4096, "inode_available": 52282425}, {"block_used": 79838, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "block_available": 2538786, "size_available": 10398867456, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "inode_used": 14836, "block_size": 4096, "inode_available": 10469900}, {"block_used": 334234, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013", "block_available": 2284390, "size_available": 9356861440, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013", "inode_used": 79705, "block_size": 4096, "inode_available": 10405031}, {"block_used": 42443, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", 
"block_available": 2576181, "size_available": 10552037376, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", "inode_used": 7156, "block_size": 4096, "inode_available": 10477580}, {"block_used": 322515, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "block_available": 2296109, "size_available": 9404862464, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "inode_used": 79368, "block_size": 4096, "inode_available": 10405368}], "ansible_nodename": "intra.work", "ansible_lvm": {"pvs": {"/dev/sda2": {"free_g": "0.06", "size_g": "465.27", "vg": "centos"}}, "lvs": {"home": {"size_g": "407.46", "vg": "centos"}, "root": {"size_g": "50.00", "vg": "centos"}, "swap": {"size_g": "7.75", "vg": "centos"}}, "vgs": {"centos": {"free_g": "0.06", "size_g": "465.27", "num_lvs": "3", "num_pvs": "1"}}}, "ansible_domain": "eu-west-1.compute.amazonaws.com", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "kvm", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIEzv1iG3Mak/xFq6KbljB8M4YaTfHo/ZiskvcC9Kz7kV", "ansible_processor_cores": 4, "ansible_bios_version": "2201", "ansible_date_time": {"weekday_number": "2", "iso8601_basic_short": "20190723T191938", "tz": "JST", "weeknumber": "29", "hour": "19", "year": "2019", "minute": "19", "tz_offset": "+0900", "month": "07", "epoch": "1563877178", "iso8601_micro": "2019-07-23T10:19:38.711920Z", "weekday": "\\u706b\\u66dc\\u65e5", "time": "19:19:38", "date": 
"2019-07-23", "iso8601": "2019-07-23T10:19:38Z", "day": "23", "iso8601_basic": "20190723T191938711851", "second": "38"}, "ansible_distribution_release": "Core", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_veth27f581f": {"macaddress": "fe:b2:74:b8:ed:7a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "veth27f581f", "promisc": true, 
"timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fcb2:74ff:feb8:ed7a"}], "active": true, "speed": 10000}, "ansible_product_name": "All Series", "ansible_devices": {"dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "209715200", "links": {"masters": ["dm-4", "dm-5", "dm-6", "dm-7", "dm-8"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", "docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "size": "100.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "ASUS", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-ASUS_DRW-24F1ST_a_S10M68EG2002AW"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "DRW-24F1ST a", 
"partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "ATA", "sectors": "976773168", "links": {"masters": [], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS", "wwn-0x5000039fe0c30158"], "uuids": []}, "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2"], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part2", "lvm-pv-uuid-Q78tme-t9AP-o3G0-fAs8-27fr-QioB-m5PGHc", "wwn-0x5000039fe0c30158-part2"], "uuids": []}, "sectors": "975747072", "start": "1026048", "holders": ["centos-root", "centos-swap", "centos-home"], "size": "465.27 GB"}, "sda1": {"sectorsize": 512, "uuid": "7d6d535a-6728-4c83-8ab2-40cb45b64e7d", "links": {"masters": [], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part1", "wwn-0x5000039fe0c30158-part1"], "uuids": ["7d6d535a-6728-4c83-8ab2-40cb45b64e7d"]}, "sectors": "1024000", "start": "2048", "holders": [], "size": "500.00 MB"}}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "TOSHIBA DT01ABA0", "wwn": "0x5000039fe0c30158", "holders": [], "size": "465.76 GB"}, "dm-8": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "uuids": ["48eda381-df74-4ad8-a63a-46c167bf1144"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-6": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": 
["dm-name-docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-7": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "loop1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "4194304", "links": {"masters": ["dm-3"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-pool"], "size": "2.00 GB"}, "loop0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "209715200", "links": {"masters": ["dm-3"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-pool"], "size": "100.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "854499328", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-home", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSMaaSDIGwxchQRoUyL1ntnMPT6KOAriTU"], "uuids": ["80fe1d0c-c3c4-4442-a467-f2975fd87ba5"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", 
"model": null, "partitions": {}, "holders": [], "size": "407.46 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "104857600", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-root", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSUq3u89edCoYmtN3lATh7xy5GMZr5Pgo7"], "uuids": ["ac012a2a-a7f8-425b-911a-9197e611fbfe"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "50.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "16252928", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-swap", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnS5bxciorzpAwg9QYL7sMS1PoWUcb0IiXV"], "uuids": ["13900741-5a75-44f6-8848-3325135493d0"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "7.75 GB"}}, "ansible_user_uid": 0, "ansible_bios_date": "11/26/2014", "ansible_distribution": "CentOS", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", 
"cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_veth5af9b4f": {"macaddress": "a2:e7:6f:a1:a0:d4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 
1500, "device": "veth5af9b4f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a0e7:6fff:fea1:a0d4"}], "active": true, "speed": 10000}}}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 192.168.0.196 closed.\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
ok: [192.168.0.196]
META: ran handlers
TASK [template] ****************************************************************************************************************************************************************************************************
task path: /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/site.yml:4
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.0.196> (0, b'/root\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802 `" && echo ansible-tmp-1563877178.840123-58444523661802="` echo /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802 `" ) && sleep 0'"'"''
<192.168.0.196> (0, b'ansible-tmp-1563877178.840123-58444523661802=/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/files/stat.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329 TO /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329 /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:8068\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 8068 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py && sleep 0'"'"''
<192.168.0.196> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "path": "/root/dummy.txt", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "us-ascii", "uid": 0, "exists": true, "attr_flags": "", "woth": false, "isreg": true, "device_type": 0, "mtime": 1563871385.1164613, "block_size": 4096, "inode": 201591273, "isgid": false, "size": 6, "executable": false, "isuid": false, "readable": true, "version": "889946615", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "text/plain", "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/root/dummy.txt", "xusr": false, "atime": 1563871388.064355, "isdir": false, "ctime": 1563871385.1404603, "isblk": false, "wgrp": false, "checksum": "9591818c07e900db7e1e0bc4b884c945e6a61b24", "dev": 64768, "roth": true, "isfifo": false, "mode": "0644", "xgrp": false, "rusr": true, "attributes": []}, "changed": false}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from 
master 0\r\nShared connection to 192.168.0.196 closed.\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/files/file.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g TO /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:12803\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 12803 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py && sleep 0'"'"''
<192.168.0.196> (1, b'\r\n{"msg": "argument _diff_peek is of type <type \'bool\'> and we were unable to convert to str: Quote the entire value to ensure it does not change.", "failed": true, "exception": "WARNING: The below traceback may *not* be related to the actual failure.\\n File \\"/tmp/ansible_file_payload_eqV_MW/ansible_file_payload.zip/ansible/module_utils/basic.py\\", line 1780, in _check_argument_types\\n param[k] = type_checker(value)\\n File \\"/tmp/ansible_file_payload_eqV_MW/ansible_file_payload.zip/ansible/module_utils/basic.py\\", line 1631, in _check_type_str\\n raise TypeError(to_native(msg))\\n", "invocation": {"module_args": {"force": false, "recurse": false, "access_time_format": "%Y%m%d%H%M.%S", "_diff_peek": true, "modification_time_format": "%Y%m%d%H%M.%S", "path": "/root/dummy.txt", "follow": true}}}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.0.196 closed.\r\n')
<192.168.0.196> Failed to connect to the host via ssh: OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /Users/mhimuro/.ssh/config
debug1: /Users/mhimuro/.ssh/config line 25: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug1: /etc/ssh/ssh_config line 52: Applying options for *
debug2: resolve_canonicalize: hostname 192.168.0.196 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 66322
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 192.168.0.196 closed.
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
--- before
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt
@@ -0,0 +1,2 @@
+hello
+world
changed: [192.168.0.196] => {
"changed": true,
"diff": [
{
"after": "hello\nworld\n",
"after_header": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt",
"before": ""
}
],
"invocation": {
"dest": "/root/dummy.txt",
"follow": false,
"mode": null,
"module_args": {
"dest": "/root/dummy.txt",
"follow": false,
"mode": null,
"src": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt"
},
"src": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt"
}
}
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************************************************************************************************************************************************
192.168.0.196 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$
```
|
https://github.com/ansible/ansible/issues/59433
|
https://github.com/ansible/ansible/pull/60428
|
9a51dff0b17f01bcb280a438ecfe785e5fda4541
|
9b7198d25ecf084b6a465ba445efd426022265c3
| 2019-07-23T10:39:00Z |
python
| 2020-01-17T21:02:28Z |
test/integration/targets/file/tasks/diff_peek.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,433 |
Template module shows all rows as difference.
|
##### SUMMARY
The template module shows all rows as a difference when `STRING_CONVERSION_ACTION=error`. It seems the template module loses the `--- before` information.
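The failure trace in the verbose output points at Ansible's strict string type check rejecting the internal boolean `_diff_peek` argument. The sketch below (not Ansible's actual code, just an illustration of the behavior under `STRING_CONVERSION_ACTION=error`) shows how a strict checker rejects a bool instead of silently converting it:

```python
# Minimal sketch of a strict string type-checker, illustrating why a
# boolean argument such as _diff_peek fails when implicit conversion
# to str is configured to be an error. This is an assumption-based
# illustration, not Ansible's module_utils code.
def check_type_str(value, allow_conversion):
    """Return value as str, or raise TypeError when conversion is disallowed."""
    if isinstance(value, str):
        return value
    if allow_conversion:  # roughly STRING_CONVERSION_ACTION in ('allow', 'warn')
        return str(value)
    # roughly STRING_CONVERSION_ACTION == 'error'
    raise TypeError(
        "argument is of type %s and we were unable to convert to str: "
        "Quote the entire value to ensure it does not change." % type(value)
    )

# _diff_peek is passed as a bool, so strict mode rejects it:
try:
    check_type_str(True, allow_conversion=False)
except TypeError as exc:
    print("conversion rejected:", exc)
```

Because the rejected `file` module invocation never returns the remote content, the subsequent diff has nothing to use as "before".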
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Template Module
##### ANSIBLE VERSION
```paste below
$ ansible --version
ansible 2.8.2
config file = /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg
configured module search path = ['/Users/mhimuro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /Users/mhimuro/devel/homebrew/bin/ansible
python version = 3.7.4 (default, Jul 12 2019, 09:36:09) [Clang 10.0.1 (clang-1001.0.46.4)]
$
```
##### CONFIGURATION
```paste below
STRING_CONVERSION_ACTION(/Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg) = error
```
##### OS / ENVIRONMENT
target OS versions: CentOS Linux release 7.6.1810 (Core)
##### STEPS TO REPRODUCE
1. prepare dummy.txt on target os.
```command
# echo hello > /root/dummy.txt
```
2. ansible-playbook
```command
$ ansible-playbook -i hosts site.yml --diff --check
PLAY [deploy by template module] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************************
ok: [192.168.0.196]
TASK [template] ****************************************************************************************************************************************************************************************************
--- before
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-6716163hjzxdi/tmp9ta05f8n/dummy.txt
@@ -0,0 +1,2 @@
+hello
+world
changed: [192.168.0.196]
PLAY RECAP *********************************************************************************************************************************************************************************************************
192.168.0.196 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$
```
```yaml
$ cat site.yml
- name: deploy by template module
hosts: all
tasks:
- template:
src: dummy.txt
dest: /root/dummy.txt
```
```dummy.txt
$ cat dummy.txt
hello
world
```
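When the "before" content is lost, the diff machinery effectively compares an empty string against the rendered template, so every line shows up as an addition. A small sketch with Python's `difflib` (an illustration under that assumption, not the module's own diff code) reproduces the shape of the observed output:

```python
import difflib

# Sketch: if the remote file content cannot be retrieved, "before" is
# effectively empty, so every line of the template appears as an addition.
before = ""                  # remote content lost because the module call failed
after = "hello\nworld\n"     # rendered template content

diff = "".join(difflib.unified_diff(
    before.splitlines(keepends=True),
    after.splitlines(keepends=True),
    fromfile="before",
    tofile="after",
))
print(diff)
```

The `@@ -0,0 +1,2 @@` hunk header and the two `+` lines match the diff shown in the reproduction above.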
##### EXPECTED RESULTS
Show only modified row.
```
TASK [template] ****************************************************************************************************************************************************************************************************
--- before: /root/dummy.txt
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-67303hkyg6c4n/tmpl13oaaoa/dummy.txt
@@ -1 +1,2 @@
hello
+world
changed: [192.168.0.196]
```
##### ACTUAL RESULTS
```paste below
$ ansible-playbook -i hosts site.yml --diff --check -vvvv
ansible-playbook 2.8.2
config file = /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg
configured module search path = ['/Users/mhimuro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /Users/mhimuro/devel/homebrew/bin/ansible-playbook
python version = 3.7.4 (default, Jul 12 2019, 09:36:09) [Clang 10.0.1 (clang-1001.0.46.4)]
Using /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
script declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
auto declined parsing /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts as it did not pass it's verify_file() method
Parsed /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: site.yml *************************************************************************************************************************************************************************************************
Positional arguments: site.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
check: True
diff: True
inventory: ('/Users/mhimuro/devel/project/xxx/mhimuro/diff-test/hosts',)
forks: 5
1 plays in site.yml
PLAY [deploy by template module] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************************
task path: /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/site.yml:1
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.0.196> (0, b'/root\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876 `" && echo ansible-tmp-1563877177.7595708-149449123721876="` echo /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876 `" ) && sleep 0'"'"''
<192.168.0.196> (0, b'ansible-tmp-1563877177.7595708-149449123721876=/root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> Attempting python interpreter discovery
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<192.168.0.196> (0, b'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python2.7\n/usr/libexec/platform-python\n/usr/bin/python\nENDFOUND\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<192.168.0.196> (0, b'{"osrelease_content": "NAME=\\"CentOS Linux\\"\\nVERSION=\\"7 (Core)\\"\\nID=\\"centos\\"\\nID_LIKE=\\"rhel fedora\\"\\nVERSION_ID=\\"7\\"\\nPRETTY_NAME=\\"CentOS Linux 7 (Core)\\"\\nANSI_COLOR=\\"0;31\\"\\nCPE_NAME=\\"cpe:/o:centos:centos:7\\"\\nHOME_URL=\\"https://www.centos.org/\\"\\nBUG_REPORT_URL=\\"https://bugs.centos.org/\\"\\n\\nCENTOS_MANTISBT_PROJECT=\\"CentOS-7\\"\\nCENTOS_MANTISBT_PROJECT_VERSION=\\"7\\"\\nREDHAT_SUPPORT_PRODUCT=\\"centos\\"\\nREDHAT_SUPPORT_PRODUCT_VERSION=\\"7\\"\\n\\n", "platform_dist_result": ["centos", "7.6.1810", "Core"]}\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/system/setup.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7 TO /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7 /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpe0yldgj7\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:8 O:131072 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:9 O:163840 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:10 O:196608 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:11 O:229376 S:23099\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 32768 bytes at 98304\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 8 32768 bytes at 131072\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 9 32768 bytes at 163840\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 10 32768 bytes at 196608\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 11 23099 bytes at 229376\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/ /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/AnsiballZ_setup.py && sleep 0'"'"''
<192.168.0.196> (0, b'\r\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_fibre_channel_wwn": [], "module_setup": true, "ansible_distribution_version": "7.6", "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "C", "LC_NUMERIC": "C", "TERM": "screen", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/0", "SHLVL": "2", "SSH_TTY": "/dev/pts/0", "PWD": "/root", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "XDG_SESSION_ID": "27910", "SSH_CLIENT": "192.168.0.116 53653 22", "LOGNAME": "root", "USER": "root", "MAIL": "/var/mail/root", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", "LS_COLORS": "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au
=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:", "HOME": "/root", "LC_ALL": "C", "_": "/usr/bin/python", "SSH_CONNECTION": "192.168.0.116 53653 192.168.0.196 22"}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_vethcb884b7": {"macaddress": "8a:47:31:d5:d3:1e", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", 
"tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "vethcb884b7", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::8847:31ff:fed5:d31e"}], "active": true, "speed": 10000}, "ansible_default_ipv4": {"macaddress": "1c:87:2c:41:6e:ce", "network": "192.168.0.0", "mtu": 1500, "broadcast": "192.168.0.255", "alias": "enp4s0", "netmask": "255.255.255.0", "address": "192.168.0.196", "interface": "enp4s0", "type": "ether", "gateway": "192.168.0.3"}, "ansible_swapfree_mb": 7654, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "ja_JP.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-957.1.3.el7.x86_64", "quiet": true, "rhgb": true, "rd.lvm.lv": "centos/swap", "crashkernel": "auto", "ro": true, "root": "/dev/mapper/centos-root"}, "ansible_machine_id": "75e09accf1bb49fa8d70b2de021e00fb", "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "BD9ABFA8-FE30-2EE7-41D2-1C872C416ECE", "ansible_pkg_mgr": "yum", "ansible_vethc03bba1": {"macaddress": "1e:41:f1:a6:31:ff", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", 
"netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "vethc03bba1", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1c41:f1ff:fea6:31ff"}], "active": true, "speed": 10000}, "ansible_iscsi_iqn": "", "ansible_veth2726a80": {"macaddress": "4a:02:87:c6:94:14", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", 
"busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "veth2726a80", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::4802:87ff:fec6:9414"}], "active": true, "speed": 10000}, "ansible_all_ipv6_addresses": ["fe80::1e87:2cff:fe41:6ece", "fe80::fcb2:74ff:feb8:ed7a", "fe80::1c41:f1ff:fea6:31ff", "fe80::a0e7:6fff:fea1:a0d4", "fe80::4802:87ff:fec6:9414", "fe80::42:20ff:fe23:46cf", "fe80::8847:31ff:fed5:d31e"], "ansible_uptime_seconds": 6684898, "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": false, "ansible_hostnqn": "", "ansible_user_shell": "/bin/bash", "ansible_product_serial": "System Serial Number", "ansible_form_factor": "Desktop", "ansible_distribution_file_parsed": true, "ansible_fips": false, "ansible_user_id": "root", "ansible_selinux_python_present": true, "ansible_local": {}, "ansible_processor_vcpus": 8, "ansible_docker0": {"macaddress": "02:42:20:23:46:cf", "features": {"tx_checksum_ipv4": "off [fixed]", 
"generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [requested on]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "on", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "off [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [requested on]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "interfaces": ["veth2726a80", "vethc03bba1", "veth5af9b4f", "veth27f581f", "vethcb884b7"], "id": "8000.0242202346cf", "mtu": 1500, "device": "docker0", "promisc": false, "stp": false, "ipv4": {"broadcast": "172.17.255.255", "netmask": "255.255.0.0", "network": "172.17.0.0", "address": "172.17.0.1"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::42:20ff:fe23:46cf"}], "active": 
true, "timestamping": ["rx_software", "software"], "type": "bridge", "hw_timestamp_filters": []}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "1", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "2", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "3", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "4", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "5", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "6", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz", "7", "GenuineIntel", "Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAAdVLDGWn3TP6UxDh2EOhbblOwKh9nc8rDSSYZ33sc9SQIPhmYsGGnP62cC5Fm4uVe14lBF0Thf8IZCMIYpuLY=", "ansible_user_gid": 0, "ansible_system_vendor": "ASUS", "ansible_swaptotal_mb": 7935, "ansible_distribution_major_version": "7", "ansible_real_group_id": 0, "ansible_lsb": {}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDyDmjz8qJ/JvPuvlZi3qiIT1vBBciJiargJHLs8ccywFNcVbJiXj/3fibFoE2VISKLtcYtPvxAzMnKeowdPc5BmmTmdKyyvSMTxmbX25lhb9t0LhkFeIUXbhy+j9Wvj6/d39Yuh2zUbIqI5YR/qpssEUeh2z/eROm/jN0lj1TSnhcYxDAe04GvXGBfDCCz1lDW/rX1/JgBIdRYGUyB57BbeS3FlvFxz7NfzBEdAdr+Dvv/oxTd4aoteqx1+Z8pNVKYkDw1nbjMFcZDF9u/uANvwh3p0qw4Nfve5Sit/zkDdkdC+DkpnnR5W+M2O1o7Iyq90AafS4xCqzYG6MDR+Jv/", "ansible_user_gecos": "root", "ansible_processor_threads_per_core": 2, "ansible_system": "Linux", "ansible_all_ipv4_addresses": ["192.168.0.196", "172.17.0.1"], "ansible_python_version": "2.7.5", "ansible_product_version": "System Version", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 7690, "used": 7491, "free": 199}, "swap": {"cached": 16, "total": 7935, "free": 7654, "used": 281}, "nocache": {"used": 6807, "free": 883}}, "ansible_user_dir": "/root", "gather_subset": ["all"], "ansible_real_user_id": 0, "ansible_virtualization_role": 
"host", "ansible_dns": {"nameservers": ["192.168.0.5", "192.168.0.16"], "search": ["work"]}, "ansible_effective_group_id": 0, "ansible_enp4s0": {"macaddress": "1c:87:2c:41:6e:ce", "features": {"tx_checksum_ipv4": "off", "generic_receive_offload": "on", "tx_checksum_ipv6": "off", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off", "highdma": "on [fixed]", "rx_fcs": "off", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "off", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_checksumming": "off", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "off", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "off", "rx_checksumming": "on", "tx_tcp_segmentation": "off", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "off [requested on]", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "off", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "type": "ether", "pciid": "0000:04:00.0", "module": "r8169", "mtu": 1500, 
"device": "enp4s0", "promisc": false, "timestamping": ["tx_software", "rx_software", "software"], "ipv4": {"broadcast": "192.168.0.255", "netmask": "255.255.255.0", "network": "192.168.0.0", "address": "192.168.0.196"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::1e87:2cff:fe41:6ece"}], "active": true, "speed": 1000, "hw_timestamp_filters": []}, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", 
"rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 7690, "ansible_device_links": {"masters": {"loop1": ["dm-3"], "loop0": ["dm-3"], "sda2": ["dm-0", "dm-1", "dm-2"], "dm-3": ["dm-4", "dm-5", "dm-6", "dm-7", "dm-8"]}, "labels": {}, "ids": {"sr0": ["ata-ASUS_DRW-24F1ST_a_S10M68EG2002AW"], "sda2": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part2", "lvm-pv-uuid-Q78tme-t9AP-o3G0-fAs8-27fr-QioB-m5PGHc", "wwn-0x5000039fe0c30158-part2"], "sda": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS", "wwn-0x5000039fe0c30158"], "dm-8": ["dm-name-docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "sda1": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part1", "wwn-0x5000039fe0c30158-part1"], "dm-6": ["dm-name-docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28"], "dm-7": ["dm-name-docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf"], "dm-4": ["dm-name-docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410"], "dm-5": ["dm-name-docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff"], "dm-2": ["dm-name-centos-home", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSMaaSDIGwxchQRoUyL1ntnMPT6KOAriTU"], "dm-0": ["dm-name-centos-root", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSUq3u89edCoYmtN3lATh7xy5GMZr5Pgo7"], "dm-1": ["dm-name-centos-swap", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnS5bxciorzpAwg9QYL7sMS1PoWUcb0IiXV"]}, "uuids": {"sda1": ["7d6d535a-6728-4c83-8ab2-40cb45b64e7d"], "dm-2": ["80fe1d0c-c3c4-4442-a467-f2975fd87ba5"], 
"dm-0": ["ac012a2a-a7f8-425b-911a-9197e611fbfe"], "dm-8": ["48eda381-df74-4ad8-a63a-46c167bf1144"], "dm-1": ["13900741-5a75-44f6-8848-3325135493d0"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_proc_cmdline": {"LANG": "ja_JP.UTF-8", "BOOT_IMAGE": "/vmlinuz-3.10.0-957.1.3.el7.x86_64", "quiet": true, "rhgb": true, "rd.lvm.lv": ["centos/root", "centos/swap"], "crashkernel": "auto", "ro": true, "root": "/dev/mapper/centos-root"}, "ansible_memfree_mb": 199, "ansible_processor_count": 1, "ansible_hostname": "intra", "ansible_interfaces": ["veth2726a80", "vethcb884b7", "docker0", "lo", "enp4s0", "vethc03bba1", "veth5af9b4f", "veth27f581f"], "ansible_selinux": {"status": "disabled"}, "ansible_fqdn": "ec2-52-213-25-113.eu-west-1.compute.amazonaws.com", "ansible_mounts": [{"block_used": 8256, "uuid": "80fe1d0c-c3c4-4442-a467-f2975fd87ba5", "size_total": 437290033152, "block_total": 106760262, "mount": "/home", "block_available": 106752006, "size_available": 437256216576, "fstype": "xfs", "inode_total": 427249664, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/mapper/centos-home", "inode_used": 3, "block_size": 4096, "inode_available": 427249661}, {"block_used": 74602, "uuid": "7d6d535a-6728-4c83-8ab2-40cb45b64e7d", "size_total": 517713920, "block_total": 126395, "mount": "/boot", "block_available": 51793, "size_available": 212144128, "fstype": "xfs", "inode_total": 512000, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/sda1", "inode_used": 361, "block_size": 4096, "inode_available": 511639}, {"block_used": 10291, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "block_available": 2608333, "size_available": 10683731968, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": 
"/dev/mapper/docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "inode_used": 519, "block_size": 4096, "inode_available": 10484217}, {"block_used": 7070823, "uuid": "ac012a2a-a7f8-425b-911a-9197e611fbfe", "size_total": 53660876800, "block_total": 13100800, "mount": "/", "block_available": 6029977, "size_available": 24698785792, "fstype": "xfs", "inode_total": 52428800, "options": "rw,relatime,attr2,inode64,noquota", "device": "/dev/mapper/centos-root", "inode_used": 146375, "block_size": 4096, "inode_available": 52282425}, {"block_used": 79838, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "block_available": 2538786, "size_available": 10398867456, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "inode_used": 14836, "block_size": 4096, "inode_available": 10469900}, {"block_used": 334234, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013", "block_available": 2284390, "size_available": 9356861440, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013", "inode_used": 79705, "block_size": 4096, "inode_available": 10405031}, {"block_used": 42443, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", 
"block_available": 2576181, "size_available": 10552037376, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", "inode_used": 7156, "block_size": 4096, "inode_available": 10477580}, {"block_used": 322515, "uuid": "48eda381-df74-4ad8-a63a-46c167bf1144", "size_total": 10725883904, "block_total": 2618624, "mount": "/var/lib/docker/devicemapper/mnt/401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "block_available": 2296109, "size_available": 9404862464, "fstype": "xfs", "inode_total": 10484736, "options": "rw,relatime,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota", "device": "/dev/mapper/docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "inode_used": 79368, "block_size": 4096, "inode_available": 10405368}], "ansible_nodename": "intra.work", "ansible_lvm": {"pvs": {"/dev/sda2": {"free_g": "0.06", "size_g": "465.27", "vg": "centos"}}, "lvs": {"home": {"size_g": "407.46", "vg": "centos"}, "root": {"size_g": "50.00", "vg": "centos"}, "swap": {"size_g": "7.75", "vg": "centos"}}, "vgs": {"centos": {"free_g": "0.06", "size_g": "465.27", "num_lvs": "3", "num_pvs": "1"}}}, "ansible_domain": "eu-west-1.compute.amazonaws.com", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "kvm", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIEzv1iG3Mak/xFq6KbljB8M4YaTfHo/ZiskvcC9Kz7kV", "ansible_processor_cores": 4, "ansible_bios_version": "2201", "ansible_date_time": {"weekday_number": "2", "iso8601_basic_short": "20190723T191938", "tz": "JST", "weeknumber": "29", "hour": "19", "year": "2019", "minute": "19", "tz_offset": "+0900", "month": "07", "epoch": "1563877178", "iso8601_micro": "2019-07-23T10:19:38.711920Z", "weekday": "\\u706b\\u66dc\\u65e5", "time": "19:19:38", "date": 
"2019-07-23", "iso8601": "2019-07-23T10:19:38Z", "day": "23", "iso8601_basic": "20190723T191938711851", "second": "38"}, "ansible_distribution_release": "Core", "ansible_os_family": "RedHat", "ansible_effective_user_id": 0, "ansible_veth27f581f": {"macaddress": "fe:b2:74:b8:ed:7a", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 1500, "device": "veth27f581f", "promisc": true, 
"timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::fcb2:74ff:feb8:ed7a"}], "active": true, "speed": 10000}, "ansible_product_name": "All Series", "ansible_devices": {"dm-5": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-3": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "209715200", "links": {"masters": ["dm-4", "dm-5", "dm-6", "dm-7", "dm-8"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410", "docker-253:0-83120-a4623e9c7c34ce0ad5421a699470b3ecdb8dbcaa95b22d0e8be7dd7a730d17ff", "docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28", "docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf", "docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "size": "100.00 GB"}, "sr0": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "ASUS", "sectors": "2097151", "links": {"masters": [], "labels": [], "ids": ["ata-ASUS_DRW-24F1ST_a_S10M68EG2002AW"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)", "sectorsize": "512", "removable": "1", "support_discard": "0", "model": "DRW-24F1ST a", 
"partitions": {}, "holders": [], "size": "1024.00 MB"}, "sda": {"scheduler_mode": "deadline", "rotational": "1", "vendor": "ATA", "sectors": "976773168", "links": {"masters": [], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS", "wwn-0x5000039fe0c30158"], "uuids": []}, "partitions": {"sda2": {"sectorsize": 512, "uuid": null, "links": {"masters": ["dm-0", "dm-1", "dm-2"], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part2", "lvm-pv-uuid-Q78tme-t9AP-o3G0-fAs8-27fr-QioB-m5PGHc", "wwn-0x5000039fe0c30158-part2"], "uuids": []}, "sectors": "975747072", "start": "1026048", "holders": ["centos-root", "centos-swap", "centos-home"], "size": "465.27 GB"}, "sda1": {"sectorsize": 512, "uuid": "7d6d535a-6728-4c83-8ab2-40cb45b64e7d", "links": {"masters": [], "labels": [], "ids": ["ata-TOSHIBA_DT01ABA050V_45I6LY9KS-part1", "wwn-0x5000039fe0c30158-part1"], "uuids": ["7d6d535a-6728-4c83-8ab2-40cb45b64e7d"]}, "sectors": "1024000", "start": "2048", "holders": [], "size": "500.00 MB"}}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "TOSHIBA DT01ABA0", "wwn": "0x5000039fe0c30158", "holders": [], "size": "465.76 GB"}, "dm-8": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-6c1737562dcc6565452cd5d9ede5233096d5c71f6a3662c480a42fe564238013"], "uuids": ["48eda381-df74-4ad8-a63a-46c167bf1144"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-6": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": 
["dm-name-docker-253:0-83120-243e38f2f03754a47a972b3fc1050c92917039ea7a1c42c3d8c09c5bc626aa28"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-7": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-401522c5a3d61e22752813bd371d8367853734d43e4746474387cfc5bf727dbf"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "loop1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "4194304", "links": {"masters": ["dm-3"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-pool"], "size": "2.00 GB"}, "loop0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "209715200", "links": {"masters": ["dm-3"], "labels": [], "ids": [], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "4096", "model": null, "partitions": {}, "holders": ["docker-253:0-83120-pool"], "size": "100.00 GB"}, "dm-2": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "854499328", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-home", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSMaaSDIGwxchQRoUyL1ntnMPT6KOAriTU"], "uuids": ["80fe1d0c-c3c4-4442-a467-f2975fd87ba5"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", 
"model": null, "partitions": {}, "holders": [], "size": "407.46 GB"}, "dm-4": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "20971520", "links": {"masters": [], "labels": [], "ids": ["dm-name-docker-253:0-83120-247c996ac26c8de4c9da53d463981a78e5285f5a3be2b3a8d9d646aa5d43f410"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "65536", "model": null, "partitions": {}, "holders": [], "size": "10.00 GB"}, "dm-0": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "104857600", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-root", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnSUq3u89edCoYmtN3lATh7xy5GMZr5Pgo7"], "uuids": ["ac012a2a-a7f8-425b-911a-9197e611fbfe"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "50.00 GB"}, "dm-1": {"scheduler_mode": "", "rotational": "1", "vendor": null, "sectors": "16252928", "links": {"masters": [], "labels": [], "ids": ["dm-name-centos-swap", "dm-uuid-LVM-4fsf4CYFxxeQbCloNqEl3syUei7nCOnS5bxciorzpAwg9QYL7sMS1PoWUcb0IiXV"], "uuids": ["13900741-5a75-44f6-8848-3325135493d0"]}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": null, "partitions": {}, "holders": [], "size": "7.75 GB"}}, "ansible_user_uid": 0, "ansible_bios_date": "11/26/2014", "ansible_distribution": "CentOS", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", 
"cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_veth5af9b4f": {"macaddress": "a2:e7:6f:a1:a0:d4", "features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on", "tx_vlan_stag_hw_insert": "on", "rx_vlan_stag_hw_parse": "on", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "on", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "on", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "on", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "on", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "on", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "on", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "on"}, "type": "ether", "hw_timestamp_filters": [], "mtu": 
1500, "device": "veth5af9b4f", "promisc": true, "timestamping": ["rx_software", "software"], "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::a0e7:6fff:fea1:a0d4"}], "active": true, "speed": 10000}}}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 192.168.0.196 closed.\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1563877177.7595708-149449123721876/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
ok: [192.168.0.196]
META: ran handlers
TASK [template] ****************************************************************************************************************************************************************************************************
task path: /Users/mhimuro/devel/project/xxx/mhimuro/diff-test/site.yml:4
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<192.168.0.196> (0, b'/root\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802 `" && echo ansible-tmp-1563877178.840123-58444523661802="` echo /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802 `" ) && sleep 0'"'"''
<192.168.0.196> (0, b'ansible-tmp-1563877178.840123-58444523661802=/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/files/stat.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329 TO /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329 /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpl89nf329\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:8068\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 8068 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_stat.py && sleep 0'"'"''
<192.168.0.196> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "path": "/root/dummy.txt", "get_md5": null, "get_mime": true, "get_attributes": true}}, "stat": {"charset": "us-ascii", "uid": 0, "exists": true, "attr_flags": "", "woth": false, "isreg": true, "device_type": 0, "mtime": 1563871385.1164613, "block_size": 4096, "inode": 201591273, "isgid": false, "size": 6, "executable": false, "isuid": false, "readable": true, "version": "889946615", "pw_name": "root", "gid": 0, "ischr": false, "wusr": true, "writeable": true, "mimetype": "text/plain", "blocks": 8, "xoth": false, "islnk": false, "nlink": 1, "issock": false, "rgrp": true, "gr_name": "root", "path": "/root/dummy.txt", "xusr": false, "atime": 1563871388.064355, "isdir": false, "ctime": 1563871385.1404603, "isblk": false, "wgrp": false, "checksum": "9591818c07e900db7e1e0bc4b884c945e6a61b24", "dev": 64768, "roth": true, "isfifo": false, "mode": "0644", "xgrp": false, "rusr": true, "attributes": []}, "changed": false}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from 
master 0\r\nShared connection to 192.168.0.196 closed.\r\n')
Using module file /Users/mhimuro/devel/homebrew/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible/modules/files/file.py
<192.168.0.196> PUT /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g TO /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py
<192.168.0.196> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 '[192.168.0.196]'
<192.168.0.196> (0, b'sftp> put /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . 
-> /root size 0\r\ndebug3: Looking up /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmprpshex0g\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:12803\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 12803 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 -tt 192.168.0.196 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/AnsiballZ_file.py && sleep 0'"'"''
<192.168.0.196> (1, b'\r\n{"msg": "argument _diff_peek is of type <type \'bool\'> and we were unable to convert to str: Quote the entire value to ensure it does not change.", "failed": true, "exception": "WARNING: The below traceback may *not* be related to the actual failure.\\n File \\"/tmp/ansible_file_payload_eqV_MW/ansible_file_payload.zip/ansible/module_utils/basic.py\\", line 1780, in _check_argument_types\\n param[k] = type_checker(value)\\n File \\"/tmp/ansible_file_payload_eqV_MW/ansible_file_payload.zip/ansible/module_utils/basic.py\\", line 1631, in _check_type_str\\n raise TypeError(to_native(msg))\\n", "invocation": {"module_args": {"force": false, "recurse": false, "access_time_format": "%Y%m%d%H%M.%S", "_diff_peek": true, "modification_time_format": "%Y%m%d%H%M.%S", "path": "/root/dummy.txt", "follow": true}}}\r\n', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.0.196 closed.\r\n')
<192.168.0.196> Failed to connect to the host via ssh: OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /Users/mhimuro/.ssh/config
debug1: /Users/mhimuro/.ssh/config line 25: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug1: /etc/ssh/ssh_config line 52: Applying options for *
debug2: resolve_canonicalize: hostname 192.168.0.196 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 66322
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 192.168.0.196 closed.
<192.168.0.196> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.0.196> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/mhimuro/.ansible/cp/82c206faa0 192.168.0.196 '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1563877178.840123-58444523661802/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.0.196> (0, b'', b'OpenSSH_7.9p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /Users/mhimuro/.ssh/config\r\ndebug1: /Users/mhimuro/.ssh/config line 25: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 52: Applying options for *\r\ndebug2: resolve_canonicalize: hostname 192.168.0.196 is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 66322\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
--- before
+++ after: /Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt
@@ -0,0 +1,2 @@
+hello
+world
changed: [192.168.0.196] => {
    "changed": true,
    "diff": [
        {
            "after": "hello\nworld\n",
            "after_header": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt",
            "before": ""
        }
    ],
    "invocation": {
        "dest": "/root/dummy.txt",
        "follow": false,
        "mode": null,
        "module_args": {
            "dest": "/root/dummy.txt",
            "follow": false,
            "mode": null,
            "src": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt"
        },
        "src": "/Users/mhimuro/.ansible/tmp/ansible-local-66383fj98ak6_/tmpjpkkggw8/dummy.txt"
    }
}
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************************************************************************************************************************************************
192.168.0.196 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$
```
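The root cause visible in the transcript above is that the `file` module's internal `_diff_peek` option is passed as a YAML bool while the module declares it as `str`, and Ansible 2.8's stricter string handling refuses to silently convert it, producing the `"argument _diff_peek is of type <type 'bool'> ..."` failure. The following is a minimal standalone sketch of that behavior; `check_type_str` here is a simplified stand-in for the check named in the traceback, not Ansible's actual implementation:

```python
def check_type_str(value, allow_conversion=False):
    """Simplified stand-in for a strict str type check: pass strings
    through unchanged, refuse to silently convert anything else."""
    if isinstance(value, str):
        return value
    if allow_conversion:
        return str(value)
    raise TypeError(
        "argument is of type %s and we were unable to convert to str: "
        "Quote the entire value to ensure it does not change." % type(value)
    )

# _diff_peek arrives from diff mode as a YAML bool, so the strict check fails:
try:
    check_type_str(True)
except TypeError as exc:
    print("failed:", exc)

# A quoted value would have reached the module as a str and passed untouched:
print(check_type_str("yes"))
```

The linked PR addresses this by declaring `_diff_peek` with its actual type instead of relying on string conversion.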
|
https://github.com/ansible/ansible/issues/59433
|
https://github.com/ansible/ansible/pull/60428
|
9a51dff0b17f01bcb280a438ecfe785e5fda4541
|
9b7198d25ecf084b6a465ba445efd426022265c3
| 2019-07-23T10:39:00Z |
python
| 2020-01-17T21:02:28Z |
test/integration/targets/file/tasks/main.yml
|
# Test code for the file module.
# (c) 2014, Richard Isaacson <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- set_fact: output_file={{output_dir}}/foo.txt
# same as expanduser & expandvars called on managed host
- command: 'echo {{ output_file }}'
register: echo
- set_fact:
remote_file_expanded: '{{ echo.stdout }}'
# Include the tests
- name: Run tests for state=link
include: state_link.yml
- name: Run tests for directory as dest
include: directory_as_dest.yml
- name: Run tests for unicode
include: unicode_path.yml
environment:
LC_ALL: C
LANG: C
- name: decide to include or not include selinux tests
include: selinux_tests.yml
when: selinux_installed is defined and selinux_installed.stdout != "" and selinux_enabled.stdout != "Disabled"
- name: Initialize the test output dir
include: initialize.yml
# These tests need to be organized by state parameter into separate files later
- name: verify that we are checking a file and it is present
file: path={{output_file}} state=file
register: file_result
- name: verify that the file was not marked as changed
  assert:
    that:
      - "file_result.changed == false"
      - "file_result.state == 'file'"
- name: verify that we are checking an absent file
file: path={{output_dir}}/bar.txt state=absent
register: file2_result
- name: verify that the file was not marked as changed
  assert:
    that:
      - "file2_result.changed == false"
      - "file2_result.state == 'absent'"
- name: verify we can touch a file
file: path={{output_dir}}/baz.txt state=touch
register: file3_result
- name: verify that the file was marked as changed
assert:
that:
- "file3_result.changed == true"
- "file3_result.state == 'file'"
- "file3_result.mode == '0644'"
- name: change file mode
file: path={{output_dir}}/baz.txt mode=0600
register: file4_result
- name: verify that the file was marked as changed
assert:
that:
- "file4_result.changed == true"
- "file4_result.mode == '0600'"
- name: explicitly set file attribute "A"
file: path={{output_dir}}/baz.txt attributes=A
register: file_attributes_result
ignore_errors: True
- name: add file attribute "A"
file: path={{output_dir}}/baz.txt attributes=+A
register: file_attributes_result_2
when: file_attributes_result is changed
- name: verify that the file was not marked as changed
assert:
that:
- "file_attributes_result_2 is not changed"
when: file_attributes_result is changed
- name: remove file attribute "A"
file: path={{output_dir}}/baz.txt attributes=-A
register: file_attributes_result_3
ignore_errors: True
- name: explicitly remove file attributes
file: path={{output_dir}}/baz.txt attributes=""
register: file_attributes_result_4
when: file_attributes_result_3 is changed
- name: verify that the file was not marked as changed
  assert:
    that:
      - "file_attributes_result_4 is not changed"
  when: file_attributes_result_3 is changed
- name: change ownership and group
file: path={{output_dir}}/baz.txt owner=1234 group=1234
- name: Get stat info to check atime later
stat: path={{output_dir}}/baz.txt
register: file_attributes_result_5_before
- name: updates access time
file: path={{output_dir}}/baz.txt access_time=now
register: file_attributes_result_5
- name: Get stat info to check atime later
stat: path={{output_dir}}/baz.txt
register: file_attributes_result_5_after
- name: verify that the file was marked as changed and atime changed
assert:
that:
- "file_attributes_result_5 is changed"
- "file_attributes_result_5_after['stat']['atime'] != file_attributes_result_5_before['stat']['atime']"
- name: setup a tmp-like directory for ownership test
file: path=/tmp/worldwritable mode=1777 state=directory
- name: Ask to create a file without enough perms to change ownership
file: path=/tmp/worldwritable/baz.txt state=touch owner=root
become: yes
become_user: nobody
register: chown_result
ignore_errors: True
- name: Ask whether the new file exists
stat: path=/tmp/worldwritable/baz.txt
register: file_exists_result
- name: Verify that the file doesn't exist on failure
assert:
that:
- "chown_result.failed == True"
- "file_exists_result.stat.exists == False"
- name: clean up
file: path=/tmp/worldwritable state=absent
- name: create hard link to file
file: src={{output_file}} dest={{output_dir}}/hard.txt state=hard
register: file6_result
- name: verify that the file was marked as changed
assert:
that:
- "file6_result.changed == true"
- name: touch a hard link
file:
dest: '{{ output_dir }}/hard.txt'
state: 'touch'
register: file6_touch_result
- name: verify that the hard link was touched
assert:
that:
- "file6_touch_result.changed == true"
- name: stat1
stat: path={{output_file}}
register: hlstat1
- name: stat2
stat: path={{output_dir}}/hard.txt
register: hlstat2
- name: verify that hard link is still the same after timestamp updated
assert:
that:
- "hlstat1.stat.inode == hlstat2.stat.inode"
- name: create hard link to file 2
file: src={{output_file}} dest={{output_dir}}/hard.txt state=hard
register: hlink_result
- name: verify that hard link creation is idempotent
assert:
that:
- "hlink_result.changed == False"
- name: Change mode on a hard link
file: src={{output_file}} dest={{output_dir}}/hard.txt mode=0701
register: file6_mode_change
- name: verify that the hard link mode was changed
assert:
that:
- "file6_mode_change.changed == true"
- name: stat1
stat: path={{output_file}}
register: hlstat1
- name: stat2
stat: path={{output_dir}}/hard.txt
register: hlstat2
- name: verify that hard link is still the same after timestamp updated
assert:
that:
- "hlstat1.stat.inode == hlstat2.stat.inode"
- "hlstat1.stat.mode == '0701'"
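The hard-link assertions above can be reproduced directly with plain `os` calls. The following is an illustrative sketch (temporary paths only, not part of the test suite): a hard link is a second name for the same inode, so a chmod through either name is observable through both.

```python
import os
import tempfile

# Reproduce the playbook's hard-link assertions with plain os calls:
# a hard link is a second name for the same inode, so a chmod through
# either name is observable through both names.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, 'orig.txt')
    hard = os.path.join(d, 'hard.txt')
    open(src, 'w').close()
    os.link(src, hard)                      # equivalent of state=hard
    os.chmod(hard, 0o701)                   # equivalent of mode=0701
    s_src, s_hard = os.stat(src), os.stat(hard)
    assert s_src.st_ino == s_hard.st_ino    # same inode either way
    assert s_src.st_mode & 0o7777 == 0o701  # mode change visible via src
```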
- name: create a directory
file: path={{output_dir}}/foobar state=directory
register: file7_result
- name: verify that the file was marked as changed
assert:
that:
- "file7_result.changed == true"
- "file7_result.state == 'directory'"
- name: determine if selinux is installed
shell: which getenforce || exit 0
register: selinux_installed
- name: determine if selinux is enabled
shell: getenforce
register: selinux_enabled
when: selinux_installed.stdout != ""
ignore_errors: true
- name: remove directory foobar
file: path={{output_dir}}/foobar state=absent
- name: remove file foo.txt
file: path={{output_dir}}/foo.txt state=absent
- name: remove file bar.txt
file: path={{output_dir}}/bar.txt state=absent
- name: remove file baz.txt
file: path={{output_dir}}/baz.txt state=absent
- name: copy directory structure over
copy: src=foobar dest={{output_dir}}
- name: check what would be removed if folder state was absent and diff is enabled
file:
path: "{{ item }}"
state: absent
check_mode: yes
diff: yes
with_items:
- "{{ output_dir }}"
- "{{ output_dir }}/foobar/fileA"
register: folder_absent_result
- name: 'assert that the "absent" state lists expected files and folders for only directories'
assert:
that:
- folder_absent_result.results[0].diff.before.path_content is defined
- folder_absent_result.results[1].diff.before.path_content is not defined
- test_folder in folder_absent_result.results[0].diff.before.path_content.directories
- test_file in folder_absent_result.results[0].diff.before.path_content.files
vars:
test_folder: "{{ folder_absent_result.results[0].path }}/foobar"
test_file: "{{ folder_absent_result.results[0].path }}/foobar/fileA"
- name: Change ownership of a directory with recurse=no(default)
file: path={{output_dir}}/foobar owner=1234
- name: verify that the ownership of the directory was set
file: path={{output_dir}}/foobar state=directory
register: file8_result
- name: assert that the directory has changed to have owner 1234
assert:
that:
- "file8_result.uid == 1234"
- name: verify that the ownership of a file under the directory was not set
file: path={{output_dir}}/foobar/fileA state=file
register: file9_result
- name: assert the file owner has not changed to 1234
assert:
that:
- "file9_result.uid != 1234"
- name: change the ownership of a directory with recurse=yes
file: path={{output_dir}}/foobar owner=1235 recurse=yes
- name: verify that the ownership of the directory was set
file: path={{output_dir}}/foobar state=directory
register: file10_result
- name: assert that the directory has changed to have owner 1235
assert:
that:
- "file10_result.uid == 1235"
- name: verify that the ownership of a file under the directory was set
file: path={{output_dir}}/foobar/fileA state=file
register: file11_result
- name: assert that the file has changed to have owner 1235
assert:
that:
- "file11_result.uid == 1235"
- name: remove directory foobar
file: path={{output_dir}}/foobar state=absent
register: file14_result
- name: verify that the directory was removed
assert:
that:
- 'file14_result.changed == true'
- 'file14_result.state == "absent"'
- name: create a test sub-directory
file: dest={{output_dir}}/sub1 state=directory
register: file15_result
- name: verify that the new directory was created
assert:
that:
- 'file15_result.changed == true'
- 'file15_result.state == "directory"'
- name: create test files in the sub-directory
file: dest={{output_dir}}/sub1/{{item}} state=touch
with_items:
- file1
- file2
- file3
register: file16_result
- name: verify the files were created
assert:
that:
- 'item.changed == true'
- 'item.state == "file"'
with_items: "{{file16_result.results}}"
- name: test file creation with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=u=rwx,g=rwx,o=rwx
register: result
- name: assert file mode
assert:
that:
- result.mode == '0777'
- name: modify symbolic mode for all
file: dest={{output_dir}}/test_symbolic state=touch mode=a=r
register: result
- name: assert file mode
assert:
that:
- result.mode == '0444'
- name: modify symbolic mode for owner
file: dest={{output_dir}}/test_symbolic state=touch mode=u+w
register: result
- name: assert file mode
assert:
that:
- result.mode == '0644'
- name: modify symbolic mode for group
file: dest={{output_dir}}/test_symbolic state=touch mode=g+w
register: result
- name: assert file mode
assert:
that:
- result.mode == '0664'
- name: modify symbolic mode for world
file: dest={{output_dir}}/test_symbolic state=touch mode=o+w
register: result
- name: assert file mode
assert:
that:
- result.mode == '0666'
- name: modify symbolic mode for owner
file: dest={{output_dir}}/test_symbolic state=touch mode=u+x
register: result
- name: assert file mode
assert:
that:
- result.mode == '0766'
- name: modify symbolic mode for group
file: dest={{output_dir}}/test_symbolic state=touch mode=g+x
register: result
- name: assert file mode
assert:
that:
- result.mode == '0776'
- name: modify symbolic mode for world
file: dest={{output_dir}}/test_symbolic state=touch mode=o+x
register: result
- name: assert file mode
assert:
that:
- result.mode == '0777'
- name: remove symbolic mode for world
file: dest={{output_dir}}/test_symbolic state=touch mode=o-wx
register: result
- name: assert file mode
assert:
that:
- result.mode == '0774'
- name: remove symbolic mode for group
file: dest={{output_dir}}/test_symbolic state=touch mode=g-wx
register: result
- name: assert file mode
assert:
that:
- result.mode == '0744'
- name: remove symbolic mode for owner
file: dest={{output_dir}}/test_symbolic state=touch mode=u-wx
register: result
- name: assert file mode
assert:
that:
- result.mode == '0444'
- name: set sticky bit with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=o+t
register: result
- name: assert file mode
assert:
that:
- result.mode == '01444'
- name: remove sticky bit with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=o-t
register: result
- name: assert file mode
assert:
that:
- result.mode == '0444'
- name: add setgid with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=g+s
register: result
- name: assert file mode
assert:
that:
- result.mode == '02444'
- name: remove setgid with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=g-s
register: result
- name: assert file mode
assert:
that:
- result.mode == '0444'
- name: add setuid with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=u+s
register: result
- name: assert file mode
assert:
that:
- result.mode == '04444'
- name: remove setuid with symbolic mode
file: dest={{output_dir}}/test_symbolic state=touch mode=u-s
register: result
- name: assert file mode
assert:
that:
- result.mode == '0444'
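The symbolic-mode sequence above maps onto octal modes in a predictable way. As an illustration, here is a minimal parser for the subset exercised by most of these tasks (classes u/g/o/a, permissions r/w/x only; the setuid/setgid/sticky clauses such as `u+s` or `o+t` are out of scope). This is a sketch, not the file module's actual implementation:

```python
import stat

# Per-class permission bits, mirroring chmod's u/g/o triads.
CLASS_BITS = {
    'u': {'r': stat.S_IRUSR, 'w': stat.S_IWUSR, 'x': stat.S_IXUSR},
    'g': {'r': stat.S_IRGRP, 'w': stat.S_IWGRP, 'x': stat.S_IXGRP},
    'o': {'r': stat.S_IROTH, 'w': stat.S_IWOTH, 'x': stat.S_IXOTH},
}

def apply_symbolic(mode, expr):
    """Apply a comma-separated symbolic mode expression (e.g. 'u+w' or
    'u=rwx,g=rwx,o=rwx') to an octal mode. Only u/g/o/a and r/w/x are
    handled; setuid/setgid/sticky clauses are not."""
    for clause in expr.split(','):
        for i, ch in enumerate(clause):
            if ch in '+-=':
                who, op, perms = clause[:i] or 'a', ch, clause[i + 1:]
                break
        classes = 'ugo' if who == 'a' else who
        bits = 0
        for c in classes:
            for p in perms:
                bits |= CLASS_BITS[c][p]
        if op == '=':
            clear = 0
            for c in classes:
                for b in CLASS_BITS[c].values():
                    clear |= b
            mode = (mode & ~clear) | bits   # '=' replaces the whole triad
        elif op == '+':
            mode |= bits                    # '+' adds bits
        else:
            mode &= ~bits                   # '-' removes bits
    return mode

# The first few tasks above, replayed:
m = apply_symbolic(0, 'u=rwx,g=rwx,o=rwx')  # 0o777
m = apply_symbolic(m, 'a=r')                # 0o444
m = apply_symbolic(m, 'u+w')                # 0o644
assert m == 0o644
```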
# https://github.com/ansible/ansible/issues/50943
# Need to use /tmp as nobody can't access output_dir at all
- name: create file as root with all write permissions
file: dest=/tmp/write_utime state=touch mode=0666 owner={{ansible_user_id}}
- name: Pause to ensure stat times are not the exact same
pause:
seconds: 1
- block:
- name: get previous time
stat: path=/tmp/write_utime
register: previous_time
- name: pause for 1 second to ensure the next touch is newer
pause: seconds=1
- name: touch file as nobody
file: dest=/tmp/write_utime state=touch
become: True
become_user: nobody
register: result
- name: get new time
stat: path=/tmp/write_utime
register: current_time
always:
- name: remove test utime file
file: path=/tmp/write_utime state=absent
- name: assert touch file as nobody
assert:
that:
- result is changed
- current_time.stat.atime > previous_time.stat.atime
- current_time.stat.mtime > previous_time.stat.mtime
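What `state=touch` asserts here is plain utime semantics: both the access and modification times of an existing file are reset to "now". A minimal Python sketch of the same check (temporary file, POSIX filesystem assumed):

```python
import os
import tempfile

# 'state=touch' on an existing file boils down to os.utime: both the
# access and modification times are reset to the current time.
with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, 'write_utime')
    open(p, 'w').close()
    os.utime(p, (0, 0))          # force ancient timestamps first
    before = os.stat(p)
    os.utime(p, None)            # the touch: atime/mtime := now
    after = os.stat(p)
    assert after.st_atime > before.st_atime
    assert after.st_mtime > before.st_mtime
```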
# Follow + recursive tests
- name: create a toplevel directory
file: path={{output_dir}}/test_follow_rec state=directory mode=0755
- name: create a file outside of the toplevel
file: path={{output_dir}}/test_follow_rec_target_file state=touch mode=0700
- name: create a directory outside of the toplevel
file: path={{output_dir}}/test_follow_rec_target_dir state=directory mode=0700
- name: create a file inside of the link target directory
file: path={{output_dir}}/test_follow_rec_target_dir/foo state=touch mode=0700
- name: create a symlink to the file
file: path={{output_dir}}/test_follow_rec/test_link state=link src="../test_follow_rec_target_file"
- name: create a symlink to the directory
file: path={{output_dir}}/test_follow_rec/test_link_dir state=link src="../test_follow_rec_target_dir"
- name: create a symlink to a nonexistent file
file: path={{output_dir}}/test_follow_rec/nonexistent state=link src=does_not_exist force=True
- name: try to change permissions without following symlinks
file: path={{output_dir}}/test_follow_rec follow=False mode="a-x" recurse=True
- name: stat the link file target
stat: path={{output_dir}}/test_follow_rec_target_file
register: file_result
- name: stat the link dir target
stat: path={{output_dir}}/test_follow_rec_target_dir
register: dir_result
- name: stat the file inside the link dir target
stat: path={{output_dir}}/test_follow_rec_target_dir/foo
register: file_in_dir_result
- name: assert that the link targets were unmodified
assert:
that:
- file_result.stat.mode == '0700'
- dir_result.stat.mode == '0700'
- file_in_dir_result.stat.mode == '0700'
- name: try to change permissions with following symlinks
file: path={{output_dir}}/test_follow_rec follow=True mode="a-x" recurse=True
- name: stat the link file target
stat: path={{output_dir}}/test_follow_rec_target_file
register: file_result
- name: stat the link dir target
stat: path={{output_dir}}/test_follow_rec_target_dir
register: dir_result
- name: stat the file inside the link dir target
stat: path={{output_dir}}/test_follow_rec_target_dir/foo
register: file_in_dir_result
- name: assert that the link targets were modified
assert:
that:
- file_result.stat.mode == '0600'
- dir_result.stat.mode == '0600'
- file_in_dir_result.stat.mode == '0600'
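The follow/recurse behaviour tested above can be sketched with `os.walk`. This is an illustrative approximation of the module's semantics (the `remove_exec` helper is ours, not the module's code): with follow=False symlinks are skipped entirely, with follow=True the chmod goes through the link to its target.

```python
import os
import stat
import tempfile

def remove_exec(top, follow):
    """Clear the execute bits ('a-x') on every file under top. With
    follow=False symlinks are skipped; with follow=True the chmod goes
    through each link to its target, as in the tasks above."""
    exec_bits = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
    for root, dirs, files in os.walk(top, followlinks=follow):
        for name in files:
            p = os.path.join(root, name)
            if os.path.islink(p) and not follow:
                continue                      # leave the link target alone
            os.chmod(p, os.stat(p).st_mode & ~exec_bits)

with tempfile.TemporaryDirectory() as d:
    top = os.path.join(d, 'top')
    os.mkdir(top)
    target = os.path.join(d, 'target.txt')    # lives outside the tree
    open(target, 'w').close()
    os.chmod(target, 0o700)
    os.symlink(target, os.path.join(top, 'link'))
    remove_exec(top, follow=False)
    assert os.stat(target).st_mode & 0o7777 == 0o700   # target untouched
    remove_exec(top, follow=True)
    assert os.stat(target).st_mode & 0o7777 == 0o600   # target modified
```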
# https://github.com/ansible/ansible/issues/55971
- name: Test missing src and path
file:
state: hard
register: file_error1
ignore_errors: yes
- assert:
that:
- "file_error1 is failed"
- "file_error1.msg == 'missing required arguments: path'"
- name: Test missing src
file:
dest: "{{ output_dir }}/hard.txt"
state: hard
register: file_error2
ignore_errors: yes
- assert:
that:
- "file_error2 is failed"
- "file_error2.msg == 'src is required for creating new hardlinks'"
- name: Test non-existing src
file:
src: non-existing-file-that-does-not-exist.txt
dest: "{{ output_dir }}/hard.txt"
state: hard
register: file_error3
ignore_errors: yes
- assert:
that:
- "file_error3 is failed"
- "file_error3.msg == 'src does not exist'"
- "file_error3.dest == '{{ output_dir }}/hard.txt' | expanduser"
- "file_error3.src == 'non-existing-file-that-does-not-exist.txt'"
- block:
- name: Create a testing file
file:
dest: original_file.txt
state: touch
- name: Test relative path with state=hard
file:
src: original_file.txt
dest: hard_link_file.txt
state: hard
register: hard_link_relpath
- name: Just check if it was successful, we don't care about the actual hard link in this test
assert:
that:
- "hard_link_relpath is success"
always:
- name: Clean up
file:
path: "{{ item }}"
state: absent
loop:
- original_file.txt
- hard_link_file.txt
# END #55971
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,263 |
ansible + podman connector not working
|
##### SUMMARY
When using ansible with the podman connector (rootless container as user), I am not able to execute any commands.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
podman connector plugin
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = None
configured module search path = ['/var/home/dschier/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/home/dschier/.venv-python3/lib64/python3.7/site-packages/ansible
executable location = /var/home/dschier/.venv-python3/bin/ansible
python version = 3.7.5 (default, Dec 15 2019, 17:54:26) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
- Fedora 31
- Podman 1.6.2
- Ansible 2.9.2 (via pip / virtualenv)
##### STEPS TO REPRODUCE
1. Start a container as user
```
podman run -d --rm --name instance fedora:31 sleep 300
```
2. Execute some ansible
```
ansible instance -c podman -m setup -i inventory
```
##### EXPECTED RESULTS
```
ansible gathers the facts
```
##### ACTUAL RESULTS
```
ansible 2.9.2
config file = None
configured module search path = ['/var/home/dschier/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/home/dschier/.venv-python3/lib64/python3.7/site-packages/ansible
executable location = /var/home/dschier/.venv-python3/bin/ansible
python version = 3.7.5 (default, Dec 15 2019, 17:54:26) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /var/home/dschier/Projects/while-true-do/ansible-testing/inventory as it did not pass its verify_file() method
script declined parsing /var/home/dschier/Projects/while-true-do/ansible-testing/inventory as it did not pass its verify_file() method
auto declined parsing /var/home/dschier/Projects/while-true-do/ansible-testing/inventory as it did not pass its verify_file() method
Parsed /var/home/dschier/Projects/while-true-do/ansible-testing/inventory inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /var/home/dschier/.venv-python3/lib64/python3.7/site-packages/ansible/plugins/callback/minimal.py
META: ran handlers
<instance> RUN [b'podman', b'mount', b'instance']
Failed to mount container instance: b'Error: cannot mount using driver overlay in rootless mode'
<instance> RUN [b'podman', b'exec', b'instance', b'/bin/sh', b'-c', b'echo ~ && sleep 0']
<instance> RUN [b'podman', b'exec', b'instance', b'/bin/sh', b'-c', b'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1578473717.9823112-75096948351551 `" && echo ansible-tmp-1578473717.9823112-75096948351551="` echo /root/.ansible/tmp/ansible-tmp-1578473717.9823112-75096948351551 `" ) && sleep 0']
<instance> Attempting python interpreter discovery
<instance> RUN [b'podman', b'exec', b'instance', b'/bin/sh', b'-c', b"echo PLATFORM; uname; echo FOUND; command -v '/usr/bin/python'; command -v 'python3.7'; command -v 'python3.6'; command -v 'python3.5'; command -v 'python2.7'; command -v 'python2.6'; command -v '/usr/libexec/platform-python'; command -v '/usr/bin/python3'; command -v 'python'; echo ENDFOUND && sleep 0"]
<instance> RUN [b'podman', b'exec', b'instance', b'/bin/sh', b'-c', b'/usr/bin/python3.7 && sleep 0']
[WARNING]: Unhandled error in Python interpreter discovery for host instance: Expecting value: line 1 column 1 (char 0)
Using module file /var/home/dschier/.venv-python3/lib64/python3.7/site-packages/ansible/modules/system/setup.py
<instance> PUT /var/home/dschier/.ansible/tmp/ansible-local-232076i4n4mlw/tmpmbdk1qd0 TO /root/.ansible/tmp/ansible-tmp-1578473717.9823112-75096948351551/AnsiballZ_setup.py
<instance> RUN [b'podman', b'cp', b'/var/home/dschier/.ansible/tmp/ansible-local-232076i4n4mlw/tmpmbdk1qd0', b'instance:/root/.ansible/tmp/ansible-tmp-1578473717.9823112-75096948351551/AnsiballZ_setup.py']
instance | FAILED! => {
"msg": "Failed to copy file from /var/home/dschier/.ansible/tmp/ansible-local-232076i4n4mlw/tmpmbdk1qd0 to /root/.ansible/tmp/ansible-tmp-1578473717.9823112-75096948351551/AnsiballZ_setup.py in container instance\nb'Error: cannot copy into running rootless container with pause set - pass --pause=false to force copying\\n'"
}
```
|
https://github.com/ansible/ansible/issues/66263
|
https://github.com/ansible/ansible/pull/66583
|
7ae53312187ad036971faeea1fd08d77a89aa67f
|
077a8b489852ceb7fc3ca6b00d52d369f46256a7
| 2020-01-08T08:55:30Z |
python
| 2020-01-17T21:41:10Z |
changelogs/fragments/66263-podman-connection-no-pause-rootless.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,263 |
ansible + podman connector not working
|
##### SUMMARY
When using ansible with the podman connector (rootless container as user), I am not able to execute any commands.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
podman connector plugin
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = None
configured module search path = ['/var/home/dschier/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/home/dschier/.venv-python3/lib64/python3.7/site-packages/ansible
executable location = /var/home/dschier/.venv-python3/bin/ansible
python version = 3.7.5 (default, Dec 15 2019, 17:54:26) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
- Fedora 31
- Podman 1.6.2
- Ansible 2.9.2 (via pip / virtualenv)
##### STEPS TO REPRODUCE
1. Start a container as user
```
podman run -d --rm --name instance fedora:31 sleep 300
```
2. Execute some ansible
```
ansible instance -c podman -m setup -i inventory
```
##### EXPECTED RESULTS
```
ansible gathers the facts
```
##### ACTUAL RESULTS
```
ansible 2.9.2
config file = None
configured module search path = ['/var/home/dschier/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/home/dschier/.venv-python3/lib64/python3.7/site-packages/ansible
executable location = /var/home/dschier/.venv-python3/bin/ansible
python version = 3.7.5 (default, Dec 15 2019, 17:54:26) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /var/home/dschier/Projects/while-true-do/ansible-testing/inventory as it did not pass its verify_file() method
script declined parsing /var/home/dschier/Projects/while-true-do/ansible-testing/inventory as it did not pass its verify_file() method
auto declined parsing /var/home/dschier/Projects/while-true-do/ansible-testing/inventory as it did not pass its verify_file() method
Parsed /var/home/dschier/Projects/while-true-do/ansible-testing/inventory inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /var/home/dschier/.venv-python3/lib64/python3.7/site-packages/ansible/plugins/callback/minimal.py
META: ran handlers
<instance> RUN [b'podman', b'mount', b'instance']
Failed to mount container instance: b'Error: cannot mount using driver overlay in rootless mode'
<instance> RUN [b'podman', b'exec', b'instance', b'/bin/sh', b'-c', b'echo ~ && sleep 0']
<instance> RUN [b'podman', b'exec', b'instance', b'/bin/sh', b'-c', b'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1578473717.9823112-75096948351551 `" && echo ansible-tmp-1578473717.9823112-75096948351551="` echo /root/.ansible/tmp/ansible-tmp-1578473717.9823112-75096948351551 `" ) && sleep 0']
<instance> Attempting python interpreter discovery
<instance> RUN [b'podman', b'exec', b'instance', b'/bin/sh', b'-c', b"echo PLATFORM; uname; echo FOUND; command -v '/usr/bin/python'; command -v 'python3.7'; command -v 'python3.6'; command -v 'python3.5'; command -v 'python2.7'; command -v 'python2.6'; command -v '/usr/libexec/platform-python'; command -v '/usr/bin/python3'; command -v 'python'; echo ENDFOUND && sleep 0"]
<instance> RUN [b'podman', b'exec', b'instance', b'/bin/sh', b'-c', b'/usr/bin/python3.7 && sleep 0']
[WARNING]: Unhandled error in Python interpreter discovery for host instance: Expecting value: line 1 column 1 (char 0)
Using module file /var/home/dschier/.venv-python3/lib64/python3.7/site-packages/ansible/modules/system/setup.py
<instance> PUT /var/home/dschier/.ansible/tmp/ansible-local-232076i4n4mlw/tmpmbdk1qd0 TO /root/.ansible/tmp/ansible-tmp-1578473717.9823112-75096948351551/AnsiballZ_setup.py
<instance> RUN [b'podman', b'cp', b'/var/home/dschier/.ansible/tmp/ansible-local-232076i4n4mlw/tmpmbdk1qd0', b'instance:/root/.ansible/tmp/ansible-tmp-1578473717.9823112-75096948351551/AnsiballZ_setup.py']
instance | FAILED! => {
"msg": "Failed to copy file from /var/home/dschier/.ansible/tmp/ansible-local-232076i4n4mlw/tmpmbdk1qd0 to /root/.ansible/tmp/ansible-tmp-1578473717.9823112-75096948351551/AnsiballZ_setup.py in container instance\nb'Error: cannot copy into running rootless container with pause set - pass --pause=false to force copying\\n'"
}
```
|
https://github.com/ansible/ansible/issues/66263
|
https://github.com/ansible/ansible/pull/66583
|
7ae53312187ad036971faeea1fd08d77a89aa67f
|
077a8b489852ceb7fc3ca6b00d52d369f46256a7
| 2020-01-08T08:55:30Z |
python
| 2020-01-17T21:41:10Z |
lib/ansible/plugins/connection/podman.py
|
# Based on the buildah connection plugin
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#
# Connection plugin to interact with existing podman containers.
# https://github.com/containers/libpod
#
# Written by: Tomas Tomecek (https://github.com/TomasTomecek)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import distutils.spawn
import shlex
import shutil
import subprocess
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_bytes, to_native
from ansible.plugins.connection import ConnectionBase, ensure_connect
from ansible.utils.display import Display
display = Display()
DOCUMENTATION = """
author: Tomas Tomecek ([email protected])
connection: podman
short_description: Interact with an existing podman container
description:
- Run commands or put/fetch files to an existing container using the podman tool.
version_added: 2.8
options:
remote_addr:
description:
- The ID of the container you want to access.
default: inventory_hostname
vars:
- name: ansible_host
remote_user:
description:
- User specified via name or UID which is used to execute commands inside the container. If you
specify the user via UID, you must set C(ANSIBLE_REMOTE_TMP) to a path that exists
inside the container and is writable by Ansible.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
podman_extra_args:
description:
- Extra arguments to pass to the podman command line.
default: ''
ini:
- section: defaults
key: podman_extra_args
vars:
- name: ansible_podman_extra_args
env:
- name: ANSIBLE_PODMAN_EXTRA_ARGS
podman_executable:
description:
- Executable for podman command.
default: podman
vars:
- name: ansible_podman_executable
env:
- name: ANSIBLE_PODMAN_EXECUTABLE
"""
# this _has to be_ named Connection
class Connection(ConnectionBase):
"""
This is a connection plugin for podman. It uses podman binary to interact with the containers
"""
# String used to identify this Connection class from other classes
transport = 'podman'
has_pipelining = True
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
self._container_id = self._play_context.remote_addr
self._connected = False
# container filesystem will be mounted here on host
self._mount_point = None
self.user = self._play_context.remote_user
def _podman(self, cmd, cmd_args=None, in_data=None, use_container_id=True):
"""
run podman executable
:param cmd: podman's command to execute (str)
:param cmd_args: list of arguments to pass to the command (list of str/bytes)
:param in_data: data passed to podman's stdin
:return: return code, stdout, stderr
"""
podman_exec = self.get_option('podman_executable')
podman_cmd = distutils.spawn.find_executable(podman_exec)
if not podman_cmd:
raise AnsibleError("%s command not found in PATH" % podman_exec)
local_cmd = [podman_cmd]
if self.get_option('podman_extra_args'):
local_cmd += shlex.split(
to_native(
self.get_option('podman_extra_args'),
errors='surrogate_or_strict'))
local_cmd.append(cmd)
if use_container_id:
local_cmd.append(self._container_id)
if cmd_args:
local_cmd += cmd_args
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
display.vvv("RUN %s" % (local_cmd,), host=self._container_id)
p = subprocess.Popen(local_cmd, shell=False, stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate(input=in_data)
display.vvvvv("STDOUT %s" % stdout)
display.vvvvv("STDERR %s" % stderr)
display.vvvvv("RC CODE %s" % p.returncode)
stdout = to_bytes(stdout, errors='surrogate_or_strict')
stderr = to_bytes(stderr, errors='surrogate_or_strict')
return p.returncode, stdout, stderr
def _connect(self):
"""
no persistent connection is being maintained, mount container's filesystem
so we can easily access it
"""
super(Connection, self)._connect()
rc, self._mount_point, stderr = self._podman("mount")
if rc != 0:
display.v("Failed to mount container %s: %s" % (self._container_id, stderr.strip()))
else:
self._mount_point = self._mount_point.strip()
display.vvvvv("MOUNTPOINT %s RC %s STDERR %r" % (self._mount_point, rc, stderr))
self._connected = True
@ensure_connect
def exec_command(self, cmd, in_data=None, sudoable=False):
""" run specified command in a running OCI container using podman """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
# shlex.split has a bug with text strings on Python-2.6 and can only handle text strings on Python-3
cmd_args_list = shlex.split(to_native(cmd, errors='surrogate_or_strict'))
if self.user:
cmd_args_list += ["--user", self.user]
rc, stdout, stderr = self._podman("exec", cmd_args_list, in_data)
display.vvvvv("STDOUT %r STDERR %r" % (stdout, stderr))
return rc, stdout, stderr
def put_file(self, in_path, out_path):
""" Place a local file located in 'in_path' inside container at 'out_path' """
super(Connection, self).put_file(in_path, out_path)
display.vvv("PUT %s TO %s" % (in_path, out_path), host=self._container_id)
if not self._mount_point:
rc, stdout, stderr = self._podman(
"cp", [in_path, self._container_id + ":" + out_path], use_container_id=False)
if rc != 0:
raise AnsibleError("Failed to copy file from %s to %s in container %s\n%s" % (
in_path, out_path, self._container_id, stderr))
else:
real_out_path = self._mount_point + to_bytes(out_path, errors='surrogate_or_strict')
shutil.copyfile(
to_bytes(in_path, errors='surrogate_or_strict'),
to_bytes(real_out_path, errors='surrogate_or_strict')
)
def fetch_file(self, in_path, out_path):
""" obtain file specified via 'in_path' from the container and place it at 'out_path' """
super(Connection, self).fetch_file(in_path, out_path)
display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self._container_id)
if not self._mount_point:
rc, stdout, stderr = self._podman(
"cp", [self._container_id + ":" + in_path, out_path], use_container_id=False)
if rc != 0:
raise AnsibleError("Failed to fetch file from %s to %s from container %s\n%s" % (
in_path, out_path, self._container_id, stderr))
else:
real_in_path = self._mount_point + to_bytes(in_path, errors='surrogate_or_strict')
shutil.copyfile(
to_bytes(real_in_path, errors='surrogate_or_strict'),
to_bytes(out_path, errors='surrogate_or_strict')
)
def close(self):
""" unmount container's filesystem """
super(Connection, self).close()
# we actually don't need to unmount since the container is mounted anyway
# rc, stdout, stderr = self._podman("umount")
# display.vvvvv("RC %s STDOUT %r STDERR %r" % (rc, stdout, stderr))
self._connected = False
|
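The failure in the issue above comes from `podman cp` pausing a rootless container; podman's own error message points at `--pause=false` as the way out, and the changelog fragment name for the linked fix suggests the same approach. Below is a minimal sketch of how the plugin's `cp` argument list could be assembled with that flag. The `build_cp_args` helper is illustrative only, not the plugin's actual code:

```python
def build_cp_args(src, container_id, dest, rootless=True):
    """Illustrative helper (not the plugin's actual code): assemble the
    argument list for `podman cp`. Passing --pause=false avoids the
    'cannot copy into running rootless container with pause set' error
    quoted in the report; the flag name comes from podman's own message."""
    args = ['cp']
    if rootless:
        args.append('--pause=false')
    args += [src, '%s:%s' % (container_id, dest)]
    return args

# The failing call from the log, rebuilt with the workaround applied:
assert build_cp_args('/tmp/AnsiballZ_setup.py', 'instance',
                     '/root/.ansible/tmp/AnsiballZ_setup.py') == [
    'cp', '--pause=false', '/tmp/AnsiballZ_setup.py',
    'instance:/root/.ansible/tmp/AnsiballZ_setup.py']
```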
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,546 |
setup_docker integration test failing
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The `setup_docker` integration test role is failing on CentOS 8 due to missing packages. This seems to be intermittent. I cannot duplicate it locally and the packages seem to be available (at least in a test container I'm running locally).
[Example failed job](https://app.shippable.com/github/ansible/ansible/runs/156205/96/console)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`test/integration/targets/setup_docker`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.10
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Default
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Shippable CentOS 8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ansible-test integration --docker centos8 docker_container -vvv
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tests pass
##### ACTUAL RESULTS
```
TASK [setup_docker : Install Docker pre-reqs] **********************************
task path: /root/ansible/test/results/.tmp/integration/docker_container-5ud2taj7-ÅÑŚÌβŁÈ/test/integration/targets/setup_docker/tasks/RedHat-8.yml:4
...
fatal: [testhost]: FAILED! => {
"changed": false,
"failures": [
"No package dnf-utils available.",
"No package device-mapper-persistent-data available.",
"No package lvm2 available."
],
"invocation": {
"module_args": {
"allow_downgrade": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"dnf-utils",
"device-mapper-persistent-data",
"lvm2",
"libseccomp"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Failed to install some of the specified packages",
"rc": 1,
"results": []
}
```
|
https://github.com/ansible/ansible/issues/66546
|
https://github.com/ansible/ansible/pull/66572
|
dd68458da2f38bab0d713c2ef58fcc2bebd98029
|
f15050b09eb8060f4a7c5863630fed4a39a0c57c
| 2020-01-16T19:55:34Z |
python
| 2020-01-18T04:41:42Z |
test/integration/targets/setup_docker/tasks/RedHat-8.yml
|
# The RHEL extras repository must be enabled to provide the container-selinux package.
# See: https://docs.docker.com/engine/installation/linux/docker-ee/rhel/#install-using-the-repository
- name: Install Docker pre-reqs
dnf:
name: "{{ docker_prereq_packages }}"
state: present
notify: cleanup docker
- name: Set-up repository
command: dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
args:
warn: no
- name: Install docker
dnf:
name: "{{ docker_packages }}"
state: present
notify: cleanup docker
- name: Make sure the docker daemon is running (failure expected inside docker container)
service:
name: docker
state: started
ignore_errors: "{{ ansible_virtualization_type == 'docker' }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,546 |
setup_docker integration test failing
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The `setup_docker` integration test role is failing on CentOS 8 due to missing packages. This seems to be intermittent. I cannot duplicate it locally and the packages seem to be available (at least in a test container I'm running locally).
[Example failed job](https://app.shippable.com/github/ansible/ansible/runs/156205/96/console)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`test/integration/targets/setup_docker`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.10
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Default
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Shippable CentOS 8
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
ansible-test integration --docker centos8 docker_container -vvv
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tests pass
##### ACTUAL RESULTS
```
TASK [setup_docker : Install Docker pre-reqs] **********************************
task path: /root/ansible/test/results/.tmp/integration/docker_container-5ud2taj7-ÅÑŚÌβŁÈ/test/integration/targets/setup_docker/tasks/RedHat-8.yml:4
...
fatal: [testhost]: FAILED! => {
"changed": false,
"failures": [
"No package dnf-utils available.",
"No package device-mapper-persistent-data available.",
"No package lvm2 available."
],
"invocation": {
"module_args": {
"allow_downgrade": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"dnf-utils",
"device-mapper-persistent-data",
"lvm2",
"libseccomp"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Failed to install some of the specified packages",
"rc": 1,
"results": []
}
```
|
https://github.com/ansible/ansible/issues/66546
|
https://github.com/ansible/ansible/pull/66572
|
dd68458da2f38bab0d713c2ef58fcc2bebd98029
|
f15050b09eb8060f4a7c5863630fed4a39a0c57c
| 2020-01-16T19:55:34Z |
python
| 2020-01-18T04:41:42Z |
test/integration/targets/setup_docker/vars/RedHat-8.yml
|
docker_prereq_packages:
- dnf-utils
- device-mapper-persistent-data
- lvm2
- libseccomp
# Docker CE > 3:18.09.1 requires containerd.io >= 1.2.2-3 which is unavailable at this time
docker_packages:
- docker-ce-3:18.09.1
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,609 |
maven_repository: Maven Central now requires `https` and 501 on `http`
|
##### SUMMARY
As of 2020-01-15, Maven Central requires `https`.
https://support.sonatype.com/hc/en-us/articles/360041287334
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
* maven_repository
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
It's in the build system. Full output not relevant.
ANSIBLE_VERSION=2.8.5
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Not relevant
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc.
Ubuntu 18.04 CIS
-->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Attempt to get an artifact from Maven Central using `http` instead of `https`
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Download latest agent from maven central
maven_artifact:
group_id: com.contrastsecurity
artifact_id: contrast-agent
version: "{{ contrast_agent_version }}"
dest: "{{ contrast_agent_path }}/contrast/contrast.jar"
owner: tomcat
group: tomcat
mode: 0644
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expected the artifact to be copied to the AMI as it has done for years before this :-)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
amazon-ebs: TASK [contrast : Download latest agent from maven central] *********************
amazon-ebs: fatal: [teamserver-intuit-1579469195]: FAILED! => {"changed": false, "msg": "Failed to download artifact com.contrastsecurity:contrast-agent:3.6.9.11819 because of HTTP Error 501: HTTPS Requiredfor URL http://repo1.maven.org/maven2/com/contrastsecurity/contrast-agent/3.6.9.11819/contrast-agent-3.6.9.11819.jar"}
amazon-ebs:
http://repo1.maven.org/maven2/com/contrastsecurity/contrast-agent/3.6.9.11819/contrast-agent-3.6.9.11819.jar
<!--- Paste verbatim command output between quotes -->
```paste below
Not sure what command you are referring to?
```
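The `HTTP Error 501: HTTPS Required` above is Maven Central's way of rejecting plain-HTTP requests after the HTTPS cut-over. A minimal sketch of the kind of URL normalization a caller (or the module's default handling) could apply — the host set below is an assumption for illustration, not an official registry:

```python
from urllib.parse import urlparse, urlunparse

# Hosts assumed (for this sketch) to reject plain HTTP with a 501.
HTTPS_ONLY_HOSTS = {"repo1.maven.org", "repo.maven.apache.org"}

def upgrade_repository_url(url):
    """Return the URL with its scheme upgraded to https for HTTPS-only hosts."""
    parts = urlparse(url)
    if parts.scheme == "http" and parts.netloc in HTTPS_ONLY_HOSTS:
        parts = parts._replace(scheme="https")
    return urlunparse(parts)
```

Anything not on the HTTPS-only list — private mirrors, `file://` or `s3://` repository URLs — passes through unchanged.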
|
https://github.com/ansible/ansible/issues/66609
|
https://github.com/ansible/ansible/pull/66611
|
3bf8b1d1c99cf5354cb7687d4a8669a144f3f90d
|
7129453cd96b40b527a14b16cfcd7fee6d342ca2
| 2020-01-19T22:29:10Z |
python
| 2020-01-20T05:46:49Z |
lib/ansible/modules/packaging/language/maven_artifact.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2014, Chris Schmidt <chris.schmidt () contrastsecurity.com>
#
# Built using https://github.com/hamnis/useful-scripts/blob/master/python/download-maven-artifact
# as a reference and starting point.
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: maven_artifact
short_description: Downloads an Artifact from a Maven Repository
version_added: "2.0"
description:
- Downloads an artifact from a maven repository given the maven coordinates provided to the module.
- Can retrieve snapshots or release versions of the artifact and will resolve the latest available
version if one is not available.
author: "Chris Schmidt (@chrisisbeef)"
requirements:
- lxml
- boto if using a S3 repository (s3://...)
options:
group_id:
description:
- The Maven groupId coordinate
required: true
artifact_id:
description:
- The maven artifactId coordinate
required: true
version:
description:
- The maven version coordinate
- Mutually exclusive with I(version_by_spec).
version_by_spec:
description:
- The maven dependency version ranges.
- See supported version ranges on U(https://cwiki.apache.org/confluence/display/MAVENOLD/Dependency+Mediation+and+Conflict+Resolution)
- The range type "(,1.0],[1.2,)" and "(,1.1),(1.1,)" is not supported.
- Mutually exclusive with I(version).
version_added: "2.10"
classifier:
description:
- The maven classifier coordinate
extension:
description:
- The maven type/extension coordinate
default: jar
repository_url:
description:
- The URL of the Maven Repository to download from.
- Use s3://... if the repository is hosted on Amazon S3, added in version 2.2.
- Use file://... if the repository is local, added in version 2.6
default: https://repo1.maven.org/maven2
username:
description:
- The username to authenticate as to the Maven Repository. Use the AWS secret key if the repository is hosted on S3
aliases: [ "aws_secret_key" ]
password:
description:
- The password to authenticate with to the Maven Repository. Use the AWS secret access key if the repository is hosted on S3
aliases: [ "aws_secret_access_key" ]
headers:
description:
- Add custom HTTP headers to a request in hash/dict format.
type: dict
version_added: "2.8"
force_basic_auth:
version_added: "2.10"
description:
- httplib2, the library used by the uri module, only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail. This option forces the sending of the Basic authentication header
upon initial request.
default: 'no'
type: bool
dest:
description:
- The path where the artifact should be written to
- If file mode or ownerships are specified and destination path already exists, they affect the downloaded file
required: true
state:
description:
- The desired state of the artifact
default: present
choices: [present,absent]
timeout:
description:
- Specifies a timeout in seconds for the connection attempt
default: 10
version_added: "2.3"
validate_certs:
description:
- If C(no), SSL certificates will not be validated. This should only be set to C(no) when no other option exists.
type: bool
default: 'yes'
version_added: "1.9.3"
keep_name:
description:
- If C(yes), the downloaded artifact's name is preserved, i.e the version number remains part of it.
- This option only has effect when C(dest) is a directory and C(version) is set to C(latest) or C(version_by_spec)
is defined.
type: bool
default: 'no'
version_added: "2.4"
verify_checksum:
description:
- If C(never), the md5 checksum will never be downloaded and verified.
- If C(download), the md5 checksum will be downloaded and verified only after artifact download. This is the default.
- If C(change), the md5 checksum will be downloaded and verified if the destination already exist,
to verify if they are identical. This was the behaviour before 2.6. Since it downloads the md5 before (maybe)
downloading the artifact, and since some repository software, when acting as a proxy/cache, return a 404 error
if the artifact has not been cached yet, it may fail unexpectedly.
If you still need it, you should consider using C(always) instead - if you deal with a checksum, it is better to
use it to verify integrity after download.
- C(always) combines C(download) and C(change).
required: false
default: 'download'
choices: ['never', 'download', 'change', 'always']
version_added: "2.6"
extends_documentation_fragment:
- files
'''
EXAMPLES = '''
# Download the latest version of the JUnit framework artifact from Maven Central
- maven_artifact:
group_id: junit
artifact_id: junit
dest: /tmp/junit-latest.jar
# Download JUnit 4.11 from Maven Central
- maven_artifact:
group_id: junit
artifact_id: junit
version: 4.11
dest: /tmp/junit-4.11.jar
# Download an artifact from a private repository requiring authentication
- maven_artifact:
group_id: com.company
artifact_id: library-name
repository_url: 'https://repo.company.com/maven'
username: user
password: pass
dest: /tmp/library-name-latest.jar
# Download a WAR File to the Tomcat webapps directory to be deployed
- maven_artifact:
group_id: com.company
artifact_id: web-app
extension: war
repository_url: 'https://repo.company.com/maven'
dest: /var/lib/tomcat7/webapps/web-app.war
# Keep a downloaded artifact's name, i.e. retain the version
- maven_artifact:
version: latest
artifact_id: spring-core
group_id: org.springframework
dest: /tmp/
keep_name: yes
# Download the latest version of the JUnit framework artifact from Maven local
- maven_artifact:
group_id: junit
artifact_id: junit
dest: /tmp/junit-latest.jar
repository_url: "file://{{ lookup('env','HOME') }}/.m2/repository"
# Download the latest version between 3.8 and 4.0 (exclusive) of the JUnit framework artifact from Maven Central
- maven_artifact:
group_id: junit
artifact_id: junit
version_by_spec: "[3.8,4.0)"
dest: /tmp/
'''
import hashlib
import os
import posixpath
import shutil
import io
import tempfile
import traceback
from ansible.module_utils.ansible_release import __version__ as ansible_version
from re import match
LXML_ETREE_IMP_ERR = None
try:
from lxml import etree
HAS_LXML_ETREE = True
except ImportError:
LXML_ETREE_IMP_ERR = traceback.format_exc()
HAS_LXML_ETREE = False
BOTO_IMP_ERR = None
try:
import boto3
HAS_BOTO = True
except ImportError:
BOTO_IMP_ERR = traceback.format_exc()
HAS_BOTO = False
SEMANTIC_VERSION_IMP_ERR = None
try:
from semantic_version import Version, Spec
HAS_SEMANTIC_VERSION = True
except ImportError:
SEMANTIC_VERSION_IMP_ERR = traceback.format_exc()
HAS_SEMANTIC_VERSION = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.six.moves.urllib.parse import urlparse
from ansible.module_utils.urls import fetch_url
from ansible.module_utils._text import to_bytes, to_native, to_text
def split_pre_existing_dir(dirname):
'''
Return the first pre-existing directory and a list of the new directories that will be created.
'''
head, tail = os.path.split(dirname)
b_head = to_bytes(head, errors='surrogate_or_strict')
if not os.path.exists(b_head):
if head == dirname:
return None, [head]
else:
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(head)
else:
return head, [tail]
new_directory_list.append(tail)
return pre_existing_dir, new_directory_list
def adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed):
'''
Walk the new directories list and make sure that permissions are as we would expect
'''
if new_directory_list:
first_sub_dir = new_directory_list.pop(0)
if not pre_existing_dir:
working_dir = first_sub_dir
else:
working_dir = os.path.join(pre_existing_dir, first_sub_dir)
directory_args['path'] = working_dir
changed = module.set_fs_attributes_if_different(directory_args, changed)
changed = adjust_recursive_directory_permissions(working_dir, new_directory_list, module, directory_args, changed)
return changed
class Artifact(object):
def __init__(self, group_id, artifact_id, version, version_by_spec, classifier='', extension='jar'):
if not group_id:
raise ValueError("group_id must be set")
if not artifact_id:
raise ValueError("artifact_id must be set")
self.group_id = group_id
self.artifact_id = artifact_id
self.version = version
self.version_by_spec = version_by_spec
self.classifier = classifier
if not extension:
self.extension = "jar"
else:
self.extension = extension
def is_snapshot(self):
return self.version and self.version.endswith("SNAPSHOT")
def path(self, with_version=True):
base = posixpath.join(self.group_id.replace(".", "/"), self.artifact_id)
if with_version and self.version:
base = posixpath.join(base, self.version)
return base
def _generate_filename(self):
filename = self.artifact_id + "-" + self.classifier + "." + self.extension
if not self.classifier:
filename = self.artifact_id + "." + self.extension
return filename
def get_filename(self, filename=None):
if not filename:
filename = self._generate_filename()
elif os.path.isdir(filename):
filename = os.path.join(filename, self._generate_filename())
return filename
def __str__(self):
result = "%s:%s:%s" % (self.group_id, self.artifact_id, self.version)
if self.classifier:
result = "%s:%s:%s:%s:%s" % (self.group_id, self.artifact_id, self.extension, self.classifier, self.version)
elif self.extension != "jar":
result = "%s:%s:%s:%s" % (self.group_id, self.artifact_id, self.extension, self.version)
return result
@staticmethod
def parse(input):
parts = input.split(":")
if len(parts) >= 3:
g = parts[0]
a = parts[1]
v = parts[len(parts) - 1]
t = None
c = None
if len(parts) == 4:
t = parts[2]
if len(parts) == 5:
t = parts[2]
c = parts[3]
return Artifact(g, a, v, c, t)
else:
return None
class MavenDownloader:
def __init__(self, module, base, local=False, headers=None):
self.module = module
if base.endswith("/"):
base = base.rstrip("/")
self.base = base
self.local = local
self.headers = headers
self.user_agent = "Ansible {0} maven_artifact".format(ansible_version)
self.latest_version_found = None
self.metadata_file_name = "maven-metadata-local.xml" if local else "maven-metadata.xml"
def find_version_by_spec(self, artifact):
path = "/%s/%s" % (artifact.path(False), self.metadata_file_name)
content = self._getContent(self.base + path, "Failed to retrieve the maven metadata file: " + path)
xml = etree.fromstring(content)
original_versions = xml.xpath("/metadata/versioning/versions/version/text()")
versions = []
for version in original_versions:
try:
versions.append(Version.coerce(version))
except ValueError:
# The version string is not a valid semantic version
pass
parse_versions_syntax = {
# example -> (,1.0]
r"^\(,(?P<upper_bound>[0-9.]*)]$": "<={upper_bound}",
# example -> 1.0
r"^(?P<version>[0-9.]*)$": "~={version}",
# example -> [1.0]
r"^\[(?P<version>[0-9.]*)\]$": "=={version}",
# example -> [1.2, 1.3]
r"^\[(?P<lower_bound>[0-9.]*),\s*(?P<upper_bound>[0-9.]*)\]$": ">={lower_bound},<={upper_bound}",
# example -> [1.2, 1.3)
r"^\[(?P<lower_bound>[0-9.]*),\s*(?P<upper_bound>[0-9.]+)\)$": ">={lower_bound},<{upper_bound}",
# example -> [1.5,)
r"^\[(?P<lower_bound>[0-9.]*),\)$": ">={lower_bound}",
}
for regex, spec_format in parse_versions_syntax.items():
regex_result = match(regex, artifact.version_by_spec)
if regex_result:
spec = Spec(spec_format.format(**regex_result.groupdict()))
selected_version = spec.select(versions)
if not selected_version:
raise ValueError("No version found with this spec version: {0}".format(artifact.version_by_spec))
# Deal with repos on Maven that omit the patch number on the first build (e.g. 3.8 instead of 3.8.0)
if str(selected_version) not in original_versions:
selected_version.patch = None
return str(selected_version)
raise ValueError("The spec version {0} is not supported! ".format(artifact.version_by_spec))
def find_latest_version_available(self, artifact):
if self.latest_version_found:
return self.latest_version_found
path = "/%s/%s" % (artifact.path(False), self.metadata_file_name)
content = self._getContent(self.base + path, "Failed to retrieve the maven metadata file: " + path)
xml = etree.fromstring(content)
v = xml.xpath("/metadata/versioning/versions/version[last()]/text()")
if v:
self.latest_version_found = v[0]
return v[0]
def find_uri_for_artifact(self, artifact):
if artifact.version_by_spec:
artifact.version = self.find_version_by_spec(artifact)
if artifact.version == "latest":
artifact.version = self.find_latest_version_available(artifact)
if artifact.is_snapshot():
if self.local:
return self._uri_for_artifact(artifact, artifact.version)
path = "/%s/%s" % (artifact.path(), self.metadata_file_name)
content = self._getContent(self.base + path, "Failed to retrieve the maven metadata file: " + path)
xml = etree.fromstring(content)
for snapshotArtifact in xml.xpath("/metadata/versioning/snapshotVersions/snapshotVersion"):
classifier = snapshotArtifact.xpath("classifier/text()")
artifact_classifier = classifier[0] if classifier else ''
extension = snapshotArtifact.xpath("extension/text()")
artifact_extension = extension[0] if extension else ''
if artifact_classifier == artifact.classifier and artifact_extension == artifact.extension:
return self._uri_for_artifact(artifact, snapshotArtifact.xpath("value/text()")[0])
timestamp_xmlpath = xml.xpath("/metadata/versioning/snapshot/timestamp/text()")
if timestamp_xmlpath:
timestamp = timestamp_xmlpath[0]
build_number = xml.xpath("/metadata/versioning/snapshot/buildNumber/text()")[0]
return self._uri_for_artifact(artifact, artifact.version.replace("SNAPSHOT", timestamp + "-" + build_number))
return self._uri_for_artifact(artifact, artifact.version)
def _uri_for_artifact(self, artifact, version=None):
if artifact.is_snapshot() and not version:
raise ValueError("Expected unique version for snapshot artifact " + str(artifact))
elif not artifact.is_snapshot():
version = artifact.version
if artifact.classifier:
return posixpath.join(self.base, artifact.path(), artifact.artifact_id + "-" + version + "-" + artifact.classifier + "." + artifact.extension)
return posixpath.join(self.base, artifact.path(), artifact.artifact_id + "-" + version + "." + artifact.extension)
# for small files, directly get the full content
def _getContent(self, url, failmsg, force=True):
if self.local:
parsed_url = urlparse(url)
if os.path.isfile(parsed_url.path):
with io.open(parsed_url.path, 'rb') as f:
return f.read()
if force:
raise ValueError(failmsg + " because the file cannot be found: " + url)
return None
response = self._request(url, failmsg, force)
if response:
return response.read()
return None
# only for HTTP request
def _request(self, url, failmsg, force=True):
url_to_use = url
parsed_url = urlparse(url)
if parsed_url.scheme == 's3':
parsed_url = urlparse(url)
bucket_name = parsed_url.netloc
key_name = parsed_url.path[1:]
client = boto3.client('s3', aws_access_key_id=self.module.params.get('username', ''), aws_secret_access_key=self.module.params.get('password', ''))
url_to_use = client.generate_presigned_url('get_object', Params={'Bucket': bucket_name, 'Key': key_name}, ExpiresIn=10)
req_timeout = self.module.params.get('timeout')
# Hack to add parameters in the way that fetch_url expects
self.module.params['url_username'] = self.module.params.get('username', '')
self.module.params['url_password'] = self.module.params.get('password', '')
self.module.params['http_agent'] = self.user_agent
response, info = fetch_url(self.module, url_to_use, timeout=req_timeout, headers=self.headers)
if info['status'] == 200:
return response
if force:
raise ValueError(failmsg + " because of " + info['msg'] + " for URL " + url_to_use)
return None
def download(self, tmpdir, artifact, verify_download, filename=None):
if (not artifact.version and not artifact.version_by_spec) or artifact.version == "latest":
artifact = Artifact(artifact.group_id, artifact.artifact_id, self.find_latest_version_available(artifact), None,
artifact.classifier, artifact.extension)
url = self.find_uri_for_artifact(artifact)
tempfd, tempname = tempfile.mkstemp(dir=tmpdir)
try:
# copy to temp file
if self.local:
parsed_url = urlparse(url)
if os.path.isfile(parsed_url.path):
shutil.copy2(parsed_url.path, tempname)
else:
return "Can not find local file: " + parsed_url.path
else:
response = self._request(url, "Failed to download artifact " + str(artifact))
with os.fdopen(tempfd, 'wb') as f:
shutil.copyfileobj(response, f)
if verify_download:
invalid_md5 = self.is_invalid_md5(tempname, url)
if invalid_md5:
# if verify_change was set, the previous file would be deleted
os.remove(tempname)
return invalid_md5
except Exception as e:
os.remove(tempname)
raise e
# all good, now copy temp file to target
shutil.move(tempname, artifact.get_filename(filename))
return None
def is_invalid_md5(self, file, remote_url):
if os.path.exists(file):
local_md5 = self._local_md5(file)
if self.local:
parsed_url = urlparse(remote_url)
remote_md5 = self._local_md5(parsed_url.path)
else:
try:
remote_md5 = to_text(self._getContent(remote_url + '.md5', "Failed to retrieve MD5", False), errors='strict')
except UnicodeError as e:
return "Cannot retrieve a valid md5 from %s: %s" % (remote_url, to_native(e))
if not remote_md5:
return "Cannot find md5 from " + remote_url
try:
# Check if remote md5 only contains md5 or md5 + filename
_remote_md5 = remote_md5.split(None)[0]
remote_md5 = _remote_md5
# remote_md5 is empty so we continue and keep original md5 string
# This should not happen since we check for remote_md5 before
except IndexError as e:
pass
if local_md5 == remote_md5:
return None
else:
return "Checksum does not match: we computed " + local_md5 + " but the repository states " + remote_md5
return "Path does not exist: " + file
def _local_md5(self, file):
md5 = hashlib.md5()
with io.open(file, 'rb') as f:
for chunk in iter(lambda: f.read(8192), b''):
md5.update(chunk)
return md5.hexdigest()
def main():
module = AnsibleModule(
argument_spec=dict(
group_id=dict(required=True),
artifact_id=dict(required=True),
version=dict(default=None),
version_by_spec=dict(default=None),
classifier=dict(default=''),
extension=dict(default='jar'),
repository_url=dict(default='https://repo1.maven.org/maven2'),
username=dict(default=None, aliases=['aws_secret_key']),
password=dict(default=None, no_log=True, aliases=['aws_secret_access_key']),
headers=dict(type='dict'),
force_basic_auth=dict(default=False, type='bool'),
state=dict(default="present", choices=["present", "absent"]), # TODO - Implement a "latest" state
timeout=dict(default=10, type='int'),
dest=dict(type="path", required=True),
validate_certs=dict(required=False, default=True, type='bool'),
keep_name=dict(required=False, default=False, type='bool'),
verify_checksum=dict(required=False, default='download', choices=['never', 'download', 'change', 'always'])
),
add_file_common_args=True,
mutually_exclusive=[('version', 'version_by_spec')]
)
if not HAS_LXML_ETREE:
module.fail_json(msg=missing_required_lib('lxml'), exception=LXML_ETREE_IMP_ERR)
if module.params['version_by_spec'] and not HAS_SEMANTIC_VERSION:
module.fail_json(msg=missing_required_lib('semantic_version'), exception=SEMANTIC_VERSION_IMP_ERR)
repository_url = module.params["repository_url"]
if not repository_url:
repository_url = "https://repo1.maven.org/maven2"
try:
parsed_url = urlparse(repository_url)
except AttributeError as e:
module.fail_json(msg='url parsing went wrong %s' % e)
local = parsed_url.scheme == "file"
if parsed_url.scheme == 's3' and not HAS_BOTO:
module.fail_json(msg=missing_required_lib('boto3', reason='when using s3:// repository URLs'),
exception=BOTO_IMP_ERR)
group_id = module.params["group_id"]
artifact_id = module.params["artifact_id"]
version = module.params["version"]
version_by_spec = module.params["version_by_spec"]
classifier = module.params["classifier"]
extension = module.params["extension"]
headers = module.params['headers']
state = module.params["state"]
dest = module.params["dest"]
b_dest = to_bytes(dest, errors='surrogate_or_strict')
keep_name = module.params["keep_name"]
verify_checksum = module.params["verify_checksum"]
verify_download = verify_checksum in ['download', 'always']
verify_change = verify_checksum in ['change', 'always']
downloader = MavenDownloader(module, repository_url, local, headers)
if not version_by_spec and not version:
version = "latest"
try:
artifact = Artifact(group_id, artifact_id, version, version_by_spec, classifier, extension)
except ValueError as e:
module.fail_json(msg=e.args[0])
changed = False
prev_state = "absent"
if dest.endswith(os.sep):
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if not os.path.exists(b_dest):
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(dest)
os.makedirs(b_dest)
directory_args = module.load_file_common_arguments(module.params)
directory_mode = module.params["directory_mode"]
if directory_mode is not None:
directory_args['mode'] = directory_mode
else:
directory_args['mode'] = None
changed = adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed)
if os.path.isdir(b_dest):
version_part = version
if version == 'latest':
version_part = downloader.find_latest_version_available(artifact)
elif version_by_spec:
version_part = downloader.find_version_by_spec(artifact)
filename = "{artifact_id}{version_part}{classifier}.{extension}".format(
artifact_id=artifact_id,
version_part="-{0}".format(version_part) if keep_name else "",
classifier="-{0}".format(classifier) if classifier else "",
extension=extension
)
dest = posixpath.join(dest, filename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.lexists(b_dest) and ((not verify_change) or not downloader.is_invalid_md5(dest, downloader.find_uri_for_artifact(artifact))):
prev_state = "present"
if prev_state == "absent":
try:
download_error = downloader.download(module.tmpdir, artifact, verify_download, b_dest)
if download_error is None:
changed = True
else:
module.fail_json(msg="Cannot retrieve the artifact to destination: " + download_error)
except ValueError as e:
module.fail_json(msg=e.args[0])
module.params['dest'] = dest
file_args = module.load_file_common_arguments(module.params)
changed = module.set_fs_attributes_if_different(file_args, changed)
if changed:
module.exit_json(state=state, dest=dest, group_id=group_id, artifact_id=artifact_id, version=version, classifier=classifier,
extension=extension, repository_url=repository_url, changed=changed)
else:
module.exit_json(state=state, dest=dest, changed=changed)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,609 |
maven_repository: Maven Central now requires `https` and 501 on `http`
|
##### SUMMARY
As of 2020-01-15, Maven Central requires `https`.
https://support.sonatype.com/hc/en-us/articles/360041287334
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
* maven_repository
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
It's in the build system. Full output not relevant.
ANSIBLE_VERSION=2.8.5
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Not relevant
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc.
Ubuntu 18.04 CIS
-->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Attempt to get an artifact from Maven Central using `http` instead of `https`
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Download latest agent from maven central
maven_artifact:
group_id: com.contrastsecurity
artifact_id: contrast-agent
version: "{{ contrast_agent_version }}"
dest: "{{ contrast_agent_path }}/contrast/contrast.jar"
owner: tomcat
group: tomcat
mode: 0644
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expected the artifact to be copied to the AMI as it has done for years before this :-)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
amazon-ebs: TASK [contrast : Download latest agent from maven central] *********************
amazon-ebs: fatal: [teamserver-intuit-1579469195]: FAILED! => {"changed": false, "msg": "Failed to download artifact com.contrastsecurity:contrast-agent:3.6.9.11819 because of HTTP Error 501: HTTPS Requiredfor URL http://repo1.maven.org/maven2/com/contrastsecurity/contrast-agent/3.6.9.11819/contrast-agent-3.6.9.11819.jar"}
amazon-ebs:
http://repo1.maven.org/maven2/com/contrastsecurity/contrast-agent/3.6.9.11819/contrast-agent-3.6.9.11819.jar
<!--- Paste verbatim command output between quotes -->
```paste below
Not sure what command you are referring to?
```
|
https://github.com/ansible/ansible/issues/66609
|
https://github.com/ansible/ansible/pull/66611
|
3bf8b1d1c99cf5354cb7687d4a8669a144f3f90d
|
7129453cd96b40b527a14b16cfcd7fee6d342ca2
| 2020-01-19T22:29:10Z |
python
| 2020-01-20T05:46:49Z |
test/units/modules/packaging/language/test_maven_artifact.py
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
from ansible.modules.packaging.language import maven_artifact
from ansible.module_utils import basic
pytestmark = pytest.mark.usefixtures('patch_ansible_module')
maven_metadata_example = b"""<?xml version="1.0" encoding="UTF-8"?>
<metadata>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<versioning>
<latest>4.13-beta-2</latest>
<release>4.13-beta-2</release>
<versions>
<version>3.7</version>
<version>3.8</version>
<version>3.8.1</version>
<version>3.8.2</version>
<version>4.0</version>
<version>4.1</version>
<version>4.2</version>
<version>4.3</version>
<version>4.3.1</version>
<version>4.4</version>
<version>4.5</version>
<version>4.6</version>
<version>4.7</version>
<version>4.8</version>
<version>4.8.1</version>
<version>4.8.2</version>
<version>4.9</version>
<version>4.10</version>
<version>4.11-beta-1</version>
<version>4.11</version>
<version>4.12-beta-1</version>
<version>4.12-beta-2</version>
<version>4.12-beta-3</version>
<version>4.12</version>
<version>4.13-beta-1</version>
<version>4.13-beta-2</version>
</versions>
<lastUpdated>20190202141051</lastUpdated>
</versioning>
</metadata>
"""
@pytest.mark.parametrize('patch_ansible_module, version_by_spec, version_chosen', [
(None, "(,3.9]", "3.8.2"),
(None, "3.0", "3.8.2"),
(None, "[3.7]", "3.7"),
(None, "[4.10, 4.12]", "4.12"),
(None, "[4.10, 4.12)", "4.11"),
(None, "[2.0,)", "4.13-beta-2"),
])
def test_find_version_by_spec(mocker, version_by_spec, version_chosen):
_getContent = mocker.patch('ansible.modules.packaging.language.maven_artifact.MavenDownloader._getContent')
_getContent.return_value = maven_metadata_example
artifact = maven_artifact.Artifact("junit", "junit", None, version_by_spec, "jar")
mvn_downloader = maven_artifact.MavenDownloader(basic.AnsibleModule, "https://repo1.maven.org/maven2")
assert mvn_downloader.find_version_by_spec(artifact) == version_chosen
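The spec strings exercised above follow Maven's version-range syntax. The module's real resolver also orders pre-release qualifiers such as `4.13-beta-2`; the sketch below is a simplified illustration that supports only plain numeric versions, with inclusive `[`/`]` and exclusive `(`/`)` bounds:

```python
def _key(version):
    """Order versions numerically, so '4.10' sorts after '4.2'."""
    return tuple(int(part) for part in version.split("."))

def pick_version(spec, versions):
    """Return the highest available version matching a Maven range spec, or None."""
    body = spec[1:-1]
    if "," not in body:                      # exact pin, e.g. "[3.7]"
        return body.strip() if body.strip() in versions else None
    lo_s, hi_s = (s.strip() for s in body.split(","))
    matching = []
    for v in versions:
        k = _key(v)
        if lo_s and (k < _key(lo_s) or (k == _key(lo_s) and spec[0] == "(")):
            continue
        if hi_s and (k > _key(hi_s) or (k == _key(hi_s) and spec[-1] == ")")):
            continue
        matching.append(v)
    return max(matching, key=_key) if matching else None
```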
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,868 |
reboot module hangs with OpenVZ hosts
|
##### SUMMARY
Currently there is an issue with the reboot module on OpenVZ hosts. The reboot command is executed correctly and all hosts reboot, but the Ansible reboot task hangs.
If exactly the same playbook is run on a KVM or native host, it finishes without any issues.
It should be hot-fixed in the 2.7+ branches.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
reboot module
##### ANSIBLE VERSION
2.7
##### CONFIGURATION
Debian 9.9 / Ansible 2.8
##### OS / ENVIRONMENT
Debian 9.9
##### STEPS TO REPRODUCE
Trigger this reboot handler on OpenVZ hosts
```
- name: Reboot system
reboot:
reboot_timeout: 1200
post_reboot_delay: 5
connect_timeout: 2
listen: handler_reboot
```
##### EXPECTED RESULTS
The reboot is executed and, once the host is back online, the task is marked as changed and execution of the playbook continues.
##### ACTUAL RESULTS
```
...
RUNNING HANDLER [shared_handlers : Reboot system] ********
```
(and no more output, the Ansible playbook hangs)
|
https://github.com/ansible/ansible/issues/58868
|
https://github.com/ansible/ansible/pull/62680
|
617fbad7435703ee5bd628f1530818147ccb44d6
|
2b7393141fa29e607b43166a6bd8e2916cd2091f
| 2019-07-09T12:43:59Z |
python
| 2020-01-21T18:42:32Z |
changelogs/fragments/reboot-add-last-boot-time-parameter.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,868 |
reboot module hangs with OpenVZ hosts
|
##### SUMMARY
Currently there is an issue with the reboot module on OpenVZ hosts. The reboot command is executed correctly and all hosts reboot, but the Ansible reboot task hangs.
If exactly the same playbook is run on a KVM or native host, it finishes without any issues.
It should be hot-fixed in the 2.7+ branches.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
reboot module
##### ANSIBLE VERSION
2.7
##### CONFIGURATION
Debian 9.9 / Ansible 2.8
##### OS / ENVIRONMENT
Debian 9.9
##### STEPS TO REPRODUCE
Trigger this reboot handler on OpenVZ hosts
```
- name: Reboot system
reboot:
reboot_timeout: 1200
post_reboot_delay: 5
connect_timeout: 2
listen: handler_reboot
```
##### EXPECTED RESULTS
The reboot is executed and, once the host is back online, the task is marked as changed and execution of the playbook continues.
##### ACTUAL RESULTS
```
...
RUNNING HANDLER [shared_handlers : Reboot system] ********
```
(and no more output, the Ansible playbook hangs)
|
https://github.com/ansible/ansible/issues/58868
|
https://github.com/ansible/ansible/pull/62680
|
617fbad7435703ee5bd628f1530818147ccb44d6
|
2b7393141fa29e607b43166a6bd8e2916cd2091f
| 2019-07-09T12:43:59Z |
python
| 2020-01-21T18:42:32Z |
lib/ansible/modules/system/reboot.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'core'}
DOCUMENTATION = r'''
module: reboot
short_description: Reboot a machine
notes:
- C(PATH) is ignored on the remote node when searching for the C(shutdown) command. Use C(search_paths)
to specify locations to search if the default paths do not work.
description:
- Reboot a machine, wait for it to go down, come back up, and respond to commands.
- For Windows targets, use the M(win_reboot) module instead.
version_added: "2.7"
options:
pre_reboot_delay:
description:
- Seconds to wait before reboot. Passed as a parameter to the reboot command.
- On Linux, macOS and OpenBSD, this is converted to minutes and rounded down. If less than 60, it will be set to 0.
- On Solaris and FreeBSD, this will be seconds.
type: int
default: 0
post_reboot_delay:
description:
- Seconds to wait after the reboot command was successful before attempting to validate the system rebooted successfully.
- This is useful if you want to wait for something to settle despite your connection already working.
type: int
default: 0
reboot_timeout:
description:
- Maximum seconds to wait for machine to reboot and respond to a test command.
- This timeout is evaluated separately for both reboot verification and test command success so the
maximum execution time for the module is twice this amount.
type: int
default: 600
connect_timeout:
description:
- Maximum seconds to wait for a successful connection to the managed hosts before trying again.
- If unspecified, the default setting for the underlying connection plugin is used.
type: int
test_command:
description:
- Command to run on the rebooted host whose success determines that the machine is ready for
further tasks.
type: str
default: whoami
msg:
description:
- Message to display to users before reboot.
type: str
default: Reboot initiated by Ansible
search_paths:
description:
- Paths to search on the remote machine for the C(shutdown) command.
- I(Only) these paths will be searched for the C(shutdown) command. C(PATH) is ignored in the remote node when searching for the C(shutdown) command.
type: list
default: ['/sbin', '/usr/sbin', '/usr/local/sbin']
version_added: '2.8'
seealso:
- module: win_reboot
author:
- Matt Davis (@nitzmahone)
- Sam Doran (@samdoran)
'''
EXAMPLES = r'''
- name: Unconditionally reboot the machine with all defaults
reboot:
- name: Reboot a slow machine that might have lots of updates to apply
reboot:
reboot_timeout: 3600
- name: Reboot a machine with shutdown command in unusual place
reboot:
search_paths:
- '/lib/molly-guard'
'''
RETURN = r'''
rebooted:
description: true if the machine was rebooted
returned: always
type: bool
sample: true
elapsed:
description: The number of seconds that elapsed waiting for the system to be rebooted.
returned: always
type: int
sample: 23
'''
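As the `pre_reboot_delay` docs above note, on Linux, macOS, and OpenBSD the delay is converted to whole minutes and anything under 60 seconds becomes 0. That conversion is plain floor division, sketched here standalone (the helper name is illustrative):

```python
def delay_to_minutes(pre_reboot_delay):
    """Convert a pre-reboot delay in seconds to whole minutes (negative values become 0)."""
    return max(pre_reboot_delay, 0) // 60
```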
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,868 |
reboot module hangs with OpenVZ hosts
|
##### SUMMARY
Currently there is an issue with the reboot module on OpenVZ hosts. The reboot command is executed correctly and all hosts reboot, but the Ansible reboot task hangs.
If exactly the same playbook is run on a KVM or native host, it finishes without any issues.
It should be hot-fixed in the 2.7+ branches.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
reboot module
##### ANSIBLE VERSION
2.7
##### CONFIGURATION
Debian 9.9 / Ansible 2.8
##### OS / ENVIRONMENT
Debian 9.9
##### STEPS TO REPRODUCE
Trigger this reboot handler on OpenVZ hosts
```
- name: Reboot system
reboot:
reboot_timeout: 1200
post_reboot_delay: 5
connect_timeout: 2
listen: handler_reboot
```
##### EXPECTED RESULTS
The reboot is executed and, once the host is back online, the task is marked as changed and execution of the playbook continues.
##### ACTUAL RESULTS
```
...
RUNNING HANDLER [shared_handlers : Reboot system] ********
```
(and no more output, the Ansible playbook hangs)
|
https://github.com/ansible/ansible/issues/58868
|
https://github.com/ansible/ansible/pull/62680
|
617fbad7435703ee5bd628f1530818147ccb44d6
|
2b7393141fa29e607b43166a6bd8e2916cd2091f
| 2019-07-09T12:43:59Z |
python
| 2020-01-21T18:42:32Z |
lib/ansible/modules/windows/win_reboot.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: win_reboot
short_description: Reboot a windows machine
description:
- Reboot a Windows machine, wait for it to go down, come back up, and respond to commands.
- For non-Windows targets, use the M(reboot) module instead.
version_added: '2.1'
options:
pre_reboot_delay:
description:
- Seconds to wait before reboot. Passed as a parameter to the reboot command.
type: int
default: 2
aliases: [ pre_reboot_delay_sec ]
post_reboot_delay:
description:
- Seconds to wait after the reboot command was successful before attempting to validate the system rebooted successfully.
- This is useful if you want to wait for something to settle despite your connection already working.
type: int
default: 0
version_added: '2.4'
aliases: [ post_reboot_delay_sec ]
shutdown_timeout:
description:
- Maximum seconds to wait for shutdown to occur.
- Increase this timeout for very slow hardware, large update applications, etc.
- This option has been removed since Ansible 2.5 as the win_reboot behavior has changed.
type: int
default: 600
aliases: [ shutdown_timeout_sec ]
reboot_timeout:
description:
- Maximum seconds to wait for machine to re-appear on the network and respond to a test command.
- This timeout is evaluated separately for both reboot verification and test command success so maximum clock time is actually twice this value.
type: int
default: 600
aliases: [ reboot_timeout_sec ]
connect_timeout:
description:
- Maximum seconds to wait for a single successful TCP connection to the WinRM endpoint before trying again.
type: int
default: 5
aliases: [ connect_timeout_sec ]
test_command:
description:
- Command whose success determines that the machine is ready for management.
type: str
default: whoami
msg:
description:
- Message to display to users.
type: str
default: Reboot initiated by Ansible
notes:
- If a shutdown was already scheduled on the system, C(win_reboot) will abort the scheduled shutdown and enforce its own shutdown.
- Beware that when C(win_reboot) returns, the Windows system may not have settled yet and some base services could be in limbo.
This can result in unexpected behavior. Check the examples for ways to mitigate this.
- The connection user must have the C(SeRemoteShutdownPrivilege) privilege enabled, see
U(https://docs.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/force-shutdown-from-a-remote-system)
for more information.
seealso:
- module: reboot
author:
- Matt Davis (@nitzmahone)
'''
EXAMPLES = r'''
- name: Reboot the machine with all defaults
win_reboot:
- name: Reboot a slow machine that might have lots of updates to apply
win_reboot:
reboot_timeout: 3600
# Install a Windows feature and reboot if necessary
- name: Install IIS Web-Server
win_feature:
name: Web-Server
register: iis_install
- name: Reboot when Web-Server feature requires it
win_reboot:
when: iis_install.reboot_required
# One way to ensure the system is reliable, is to set WinRM to a delayed startup
- name: Ensure WinRM starts when the system has settled and is ready to work reliably
win_service:
name: WinRM
start_mode: delayed
# Additionally, you can add a delay before running the next task
- name: Reboot a machine that takes time to settle after being booted
win_reboot:
post_reboot_delay: 120
# Or you can make win_reboot validate exactly what you need to work before running the next task
- name: Validate that the netlogon service has started, before running the next task
win_reboot:
test_command: 'exit (Get-Service -Name Netlogon).Status -ne "Running"'
'''
RETURN = r'''
rebooted:
description: True if the machine was rebooted.
returned: always
type: bool
sample: true
elapsed:
description: The number of seconds that elapsed waiting for the system to be rebooted.
returned: always
type: float
sample: 23.2
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,868 |
reboot module hangs with OpenVZ hosts
|
##### SUMMARY
Currently there is an issue with the reboot module on OpenVZ hosts. The reboot command is executed correctly and all hosts reboot, but the Ansible reboot task hangs.
If exactly the same playbook is run on a KVM or native host, it finishes without any issues.
It should be hot-fixed in the 2.7+ branches.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
reboot module
##### ANSIBLE VERSION
2.7
##### CONFIGURATION
Debian 9.9 / Ansible 2.8
##### OS / ENVIRONMENT
Debian 9.9
##### STEPS TO REPRODUCE
Trigger this reboot handler on OpenVZ hosts
```
- name: Reboot system
reboot:
reboot_timeout: 1200
post_reboot_delay: 5
connect_timeout: 2
listen: handler_reboot
```
##### EXPECTED RESULTS
The reboot is executed and, once the host is back online, the task is marked as changed and execution of the playbook continues.
##### ACTUAL RESULTS
```
...
RUNNING HANDLER [shared_handlers : Reboot system] ********
```
(and no more output, the Ansible playbook hangs)
|
https://github.com/ansible/ansible/issues/58868
|
https://github.com/ansible/ansible/pull/62680
|
617fbad7435703ee5bd628f1530818147ccb44d6
|
2b7393141fa29e607b43166a6bd8e2916cd2091f
| 2019-07-09T12:43:59Z |
python
| 2020-01-21T18:42:32Z |
lib/ansible/plugins/action/reboot.py
|
# Copyright: (c) 2016-2018, Matt Davis <[email protected]>
# Copyright: (c) 2018, Sam Doran <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import random
import time
from datetime import datetime, timedelta
from ansible.errors import AnsibleError, AnsibleConnectionFailure
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common.collections import is_string
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
display = Display()
class TimedOutException(Exception):
pass
class ActionModule(ActionBase):
TRANSFERS_FILES = False
_VALID_ARGS = frozenset(('connect_timeout', 'msg', 'post_reboot_delay', 'pre_reboot_delay', 'test_command', 'reboot_timeout', 'search_paths'))
DEFAULT_REBOOT_TIMEOUT = 600
DEFAULT_CONNECT_TIMEOUT = None
DEFAULT_PRE_REBOOT_DELAY = 0
DEFAULT_POST_REBOOT_DELAY = 0
DEFAULT_TEST_COMMAND = 'whoami'
DEFAULT_BOOT_TIME_COMMAND = 'cat /proc/sys/kernel/random/boot_id'
DEFAULT_REBOOT_MESSAGE = 'Reboot initiated by Ansible'
DEFAULT_SHUTDOWN_COMMAND = 'shutdown'
DEFAULT_SHUTDOWN_COMMAND_ARGS = '-r {delay_min} "{message}"'
DEFAULT_SUDOABLE = True
DEPRECATED_ARGS = {}
BOOT_TIME_COMMANDS = {
'freebsd': '/sbin/sysctl kern.boottime',
'openbsd': '/sbin/sysctl kern.boottime',
'macosx': 'who -b',
'solaris': 'who -b',
'sunos': 'who -b',
'vmkernel': 'grep booted /var/log/vmksummary.log | tail -n 1',
'aix': 'who -b',
}
SHUTDOWN_COMMANDS = {
'alpine': 'reboot',
'vmkernel': 'reboot',
}
SHUTDOWN_COMMAND_ARGS = {
'alpine': '',
'freebsd': '-r +{delay_sec}s "{message}"',
'linux': DEFAULT_SHUTDOWN_COMMAND_ARGS,
'macosx': '-r +{delay_min} "{message}"',
'openbsd': '-r +{delay_min} "{message}"',
'solaris': '-y -g {delay_sec} -i 6 "{message}"',
'sunos': '-y -g {delay_sec} -i 6 "{message}"',
'vmkernel': '-d {delay_sec}',
'aix': '-Fr',
}
TEST_COMMANDS = {
'solaris': 'who',
'vmkernel': 'who',
}
def __init__(self, *args, **kwargs):
super(ActionModule, self).__init__(*args, **kwargs)
@property
def pre_reboot_delay(self):
return self._check_delay('pre_reboot_delay', self.DEFAULT_PRE_REBOOT_DELAY)
@property
def post_reboot_delay(self):
return self._check_delay('post_reboot_delay', self.DEFAULT_POST_REBOOT_DELAY)
def _check_delay(self, key, default):
"""Ensure that the value is positive or zero"""
value = int(self._task.args.get(key, self._task.args.get(key + '_sec', default)))
if value < 0:
value = 0
return value
def _get_value_from_facts(self, variable_name, distribution, default_value):
"""Get dist+version specific args first, then distribution, then family, lastly use default"""
attr = getattr(self, variable_name)
value = attr.get(
distribution['name'] + distribution['version'],
attr.get(
distribution['name'],
attr.get(
distribution['family'],
getattr(self, default_value))))
return value
def get_shutdown_command_args(self, distribution):
args = self._get_value_from_facts('SHUTDOWN_COMMAND_ARGS', distribution, 'DEFAULT_SHUTDOWN_COMMAND_ARGS')
# Convert seconds to minutes. If less than 60, set it to 0.
delay_min = self.pre_reboot_delay // 60
reboot_message = self._task.args.get('msg', self.DEFAULT_REBOOT_MESSAGE)
return args.format(delay_sec=self.pre_reboot_delay, delay_min=delay_min, message=reboot_message)
def get_distribution(self, task_vars):
distribution = {}
display.debug('{action}: running setup module to get distribution'.format(action=self._task.action))
module_output = self._execute_module(
task_vars=task_vars,
module_name='setup',
module_args={'gather_subset': 'min'})
try:
if module_output.get('failed', False):
raise AnsibleError('Failed to determine system distribution. {0}, {1}'.format(
to_native(module_output['module_stdout']).strip(),
to_native(module_output['module_stderr']).strip()))
distribution['name'] = module_output['ansible_facts']['ansible_distribution'].lower()
distribution['version'] = to_text(module_output['ansible_facts']['ansible_distribution_version'].split('.')[0])
distribution['family'] = to_text(module_output['ansible_facts']['ansible_os_family'].lower())
display.debug("{action}: distribution: {dist}".format(action=self._task.action, dist=distribution))
return distribution
except KeyError as ke:
raise AnsibleError('Failed to get distribution information. Missing "{0}" in output.'.format(ke.args[0]))
def get_shutdown_command(self, task_vars, distribution):
shutdown_bin = self._get_value_from_facts('SHUTDOWN_COMMANDS', distribution, 'DEFAULT_SHUTDOWN_COMMAND')
default_search_paths = ['/sbin', '/usr/sbin', '/usr/local/sbin']
search_paths = self._task.args.get('search_paths', default_search_paths)
# FIXME: switch all this to user arg spec validation methods when they are available
# Convert bare strings to a list
if is_string(search_paths):
search_paths = [search_paths]
# Error if we didn't get a list
err_msg = "'search_paths' must be a string or flat list of strings, got {0}"
try:
incorrect_type = any(not is_string(x) for x in search_paths)
if not isinstance(search_paths, list) or incorrect_type:
raise TypeError
except TypeError:
raise AnsibleError(err_msg.format(search_paths))
display.debug('{action}: running find module looking in {paths} to get path for "{command}"'.format(
action=self._task.action,
command=shutdown_bin,
paths=search_paths))
find_result = self._execute_module(
task_vars=task_vars,
module_name='find',
module_args={
'paths': search_paths,
'patterns': [shutdown_bin],
'file_type': 'any'
}
)
full_path = [x['path'] for x in find_result['files']]
if not full_path:
raise AnsibleError('Unable to find command "{0}" in search paths: {1}'.format(shutdown_bin, search_paths))
self._shutdown_command = full_path[0]
return self._shutdown_command
def deprecated_args(self):
for arg, version in self.DEPRECATED_ARGS.items():
if self._task.args.get(arg) is not None:
display.warning("Since Ansible {version}, {arg} is no longer a valid option for {action}".format(
version=version,
arg=arg,
action=self._task.action))
def get_system_boot_time(self, distribution):
boot_time_command = self._get_value_from_facts('BOOT_TIME_COMMANDS', distribution, 'DEFAULT_BOOT_TIME_COMMAND')
display.debug("{action}: getting boot time with command: '{command}'".format(action=self._task.action, command=boot_time_command))
command_result = self._low_level_execute_command(boot_time_command, sudoable=self.DEFAULT_SUDOABLE)
if command_result['rc'] != 0:
stdout = command_result['stdout']
stderr = command_result['stderr']
raise AnsibleError("{action}: failed to get host boot time info, rc: {rc}, stdout: {out}, stderr: {err}".format(
action=self._task.action,
rc=command_result['rc'],
out=to_native(stdout),
err=to_native(stderr)))
display.debug("{action}: last boot time: {boot}".format(action=self._task.action, boot=command_result['stdout'].strip()))
return command_result['stdout'].strip()
def check_boot_time(self, distribution, previous_boot_time):
display.vvv("{action}: attempting to get system boot time".format(action=self._task.action))
connect_timeout = self._task.args.get('connect_timeout', self._task.args.get('connect_timeout_sec', self.DEFAULT_CONNECT_TIMEOUT))
# override connection timeout from defaults to custom value
if connect_timeout:
try:
display.debug("{action}: setting connect_timeout to {value}".format(action=self._task.action, value=connect_timeout))
self._connection.set_option("connection_timeout", connect_timeout)
self._connection.reset()
except AttributeError:
display.warning("Connection plugin does not allow the connection timeout to be overridden")
# try and get boot time
try:
current_boot_time = self.get_system_boot_time(distribution)
except Exception as e:
raise e
# FreeBSD returns an empty string immediately before reboot so adding a length
# check to prevent prematurely assuming system has rebooted
if len(current_boot_time) == 0 or current_boot_time == previous_boot_time:
raise ValueError("boot time has not changed")
def run_test_command(self, distribution, **kwargs):
test_command = self._task.args.get('test_command', self._get_value_from_facts('TEST_COMMANDS', distribution, 'DEFAULT_TEST_COMMAND'))
display.vvv("{action}: attempting post-reboot test command".format(action=self._task.action))
display.debug("{action}: attempting post-reboot test command '{command}'".format(action=self._task.action, command=test_command))
try:
command_result = self._low_level_execute_command(test_command, sudoable=self.DEFAULT_SUDOABLE)
except Exception:
# may need to reset the connection in case another reboot occurred
# which has invalidated our connection
try:
self._connection.reset()
except AttributeError:
pass
raise
if command_result['rc'] != 0:
msg = 'Test command failed: {err} {out}'.format(
err=to_native(command_result['stderr']),
out=to_native(command_result['stdout']))
raise RuntimeError(msg)
display.vvv("{action}: system successfully rebooted".format(action=self._task.action))
def do_until_success_or_timeout(self, action, reboot_timeout, action_desc, distribution, action_kwargs=None):
max_end_time = datetime.utcnow() + timedelta(seconds=reboot_timeout)
if action_kwargs is None:
action_kwargs = {}
fail_count = 0
max_fail_sleep = 12
while datetime.utcnow() < max_end_time:
try:
action(distribution=distribution, **action_kwargs)
if action_desc:
display.debug('{action}: {desc} success'.format(action=self._task.action, desc=action_desc))
return
except Exception as e:
if isinstance(e, AnsibleConnectionFailure):
try:
self._connection.reset()
except AnsibleConnectionFailure:
pass
# Use exponential backoff with a max timeout, plus a little bit of randomness
random_int = random.randint(0, 1000) / 1000
fail_sleep = 2 ** fail_count + random_int
if fail_sleep > max_fail_sleep:
fail_sleep = max_fail_sleep + random_int
if action_desc:
try:
error = to_text(e).splitlines()[-1]
except IndexError as e:
error = to_text(e)
display.debug("{action}: {desc} fail '{err}', retrying in {sleep:.4} seconds...".format(
action=self._task.action,
desc=action_desc,
err=error,
sleep=fail_sleep))
fail_count += 1
time.sleep(fail_sleep)
raise TimedOutException('Timed out waiting for {desc} (timeout={timeout})'.format(desc=action_desc, timeout=reboot_timeout))
def perform_reboot(self, task_vars, distribution):
result = {}
reboot_result = {}
shutdown_command = self.get_shutdown_command(task_vars, distribution)
shutdown_command_args = self.get_shutdown_command_args(distribution)
reboot_command = '{0} {1}'.format(shutdown_command, shutdown_command_args)
try:
display.vvv("{action}: rebooting server...".format(action=self._task.action))
display.debug("{action}: rebooting server with command '{command}'".format(action=self._task.action, command=reboot_command))
reboot_result = self._low_level_execute_command(reboot_command, sudoable=self.DEFAULT_SUDOABLE)
except AnsibleConnectionFailure as e:
# If the connection is closed too quickly due to the system being shutdown, carry on
display.debug('{action}: AnsibleConnectionFailure caught and handled: {error}'.format(action=self._task.action, error=to_text(e)))
reboot_result['rc'] = 0
result['start'] = datetime.utcnow()
if reboot_result['rc'] != 0:
result['failed'] = True
result['rebooted'] = False
result['msg'] = "Reboot command failed. Error was {stdout}, {stderr}".format(
stdout=to_native(reboot_result['stdout'].strip()),
stderr=to_native(reboot_result['stderr'].strip()))
return result
result['failed'] = False
return result
def validate_reboot(self, distribution, original_connection_timeout=None, action_kwargs=None):
display.vvv('{action}: validating reboot'.format(action=self._task.action))
result = {}
try:
# keep on checking system boot_time with short connection responses
reboot_timeout = int(self._task.args.get('reboot_timeout', self._task.args.get('reboot_timeout_sec', self.DEFAULT_REBOOT_TIMEOUT)))
self.do_until_success_or_timeout(
action=self.check_boot_time,
action_desc="last boot time check",
reboot_timeout=reboot_timeout,
distribution=distribution,
action_kwargs=action_kwargs)
# Get the connect_timeout set on the connection to compare to the original
try:
connect_timeout = self._connection.get_option('connection_timeout')
except KeyError:
pass
else:
if original_connection_timeout != connect_timeout:
try:
display.debug("{action}: setting connect_timeout back to original value of {value}".format(
action=self._task.action,
value=original_connection_timeout))
self._connection.set_option("connection_timeout", original_connection_timeout)
self._connection.reset()
except (AnsibleError, AttributeError) as e:
# reset the connection to clear the custom connection timeout
display.debug("{action}: failed to reset connection_timeout back to default: {error}".format(action=self._task.action,
error=to_text(e)))
# finally run test command to ensure everything is working
# FUTURE: add a stability check (system must remain up for N seconds) to deal with self-multi-reboot updates
self.do_until_success_or_timeout(
action=self.run_test_command,
action_desc="post-reboot test command",
reboot_timeout=reboot_timeout,
distribution=distribution,
action_kwargs=action_kwargs)
result['rebooted'] = True
result['changed'] = True
except TimedOutException as toex:
result['failed'] = True
result['rebooted'] = True
result['msg'] = to_text(toex)
return result
return result
def run(self, tmp=None, task_vars=None):
self._supports_check_mode = True
self._supports_async = True
# If running with local connection, fail so we don't reboot ourself
if self._connection.transport == 'local':
msg = 'Running {0} with local connection would reboot the control node.'.format(self._task.action)
return {'changed': False, 'elapsed': 0, 'rebooted': False, 'failed': True, 'msg': msg}
if self._play_context.check_mode:
return {'changed': True, 'elapsed': 0, 'rebooted': True}
if task_vars is None:
task_vars = {}
self.deprecated_args()
result = super(ActionModule, self).run(tmp, task_vars)
if result.get('skipped', False) or result.get('failed', False):
return result
distribution = self.get_distribution(task_vars)
# Get current boot time
try:
previous_boot_time = self.get_system_boot_time(distribution)
except Exception as e:
result['failed'] = True
result['rebooted'] = False
result['msg'] = to_text(e)
return result
# Get the original connection_timeout option var so it can be reset after
original_connection_timeout = None
try:
original_connection_timeout = self._connection.get_option('connection_timeout')
display.debug("{action}: saving original connect_timeout of {timeout}".format(action=self._task.action, timeout=original_connection_timeout))
except KeyError:
display.debug("{action}: connect_timeout connection option has not been set".format(action=self._task.action))
# Initiate reboot
reboot_result = self.perform_reboot(task_vars, distribution)
if reboot_result['failed']:
result = reboot_result
elapsed = datetime.utcnow() - reboot_result['start']
result['elapsed'] = elapsed.seconds
return result
if self.post_reboot_delay != 0:
display.debug("{action}: waiting an additional {delay} seconds".format(action=self._task.action, delay=self.post_reboot_delay))
display.vvv("{action}: waiting an additional {delay} seconds".format(action=self._task.action, delay=self.post_reboot_delay))
time.sleep(self.post_reboot_delay)
# Make sure reboot was successful
result = self.validate_reboot(distribution, original_connection_timeout, action_kwargs={'previous_boot_time': previous_boot_time})
elapsed = datetime.utcnow() - reboot_result['start']
result['elapsed'] = elapsed.seconds
return result
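The retry loop in `do_until_success_or_timeout` above waits between attempts using capped exponential backoff with up to a second of jitter. Reproduced as a standalone sketch (cap of 12 seconds, matching `max_fail_sleep` in the plugin):

```python
import random

def backoff_delays(attempts, cap=12):
    """Delays between successive retries: 2**n plus up to 1s of jitter, capped."""
    delays = []
    for n in range(attempts):
        jitter = random.randint(0, 1000) / 1000
        delay = 2 ** n + jitter
        if delay > cap:
            delay = cap + jitter
        delays.append(delay)
    return delays
```

The jitter keeps many hosts retried in parallel from hammering the controller in lockstep once they come back up.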
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,471 |
lib/ansible/module_utils/basic.py contains deprecation which was supposed to be removed for 2.9
|
##### SUMMARY
See https://github.com/ansible/ansible/blob/88d8cf8197c53edd3bcdcd21429eb4c2bfbf0f6a/lib/ansible/module_utils/basic.py#L699-L706
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```paste below
2.9
devel
```
|
https://github.com/ansible/ansible/issues/65471
|
https://github.com/ansible/ansible/pull/65745
|
0b503f6057b5e60d84a3ee7fe11914eeacc05656
|
c58d8ed1f5f7f47f2a1d8069e04452353c052824
| 2019-12-03T18:53:41Z |
python
| 2020-01-21T21:58:26Z |
changelogs/fragments/remove-2.9-deprecations.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,471 |
lib/ansible/module_utils/basic.py contains deprecation which was supposed to be removed for 2.9
|
##### SUMMARY
See https://github.com/ansible/ansible/blob/88d8cf8197c53edd3bcdcd21429eb4c2bfbf0f6a/lib/ansible/module_utils/basic.py#L699-L706
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```paste below
2.9
devel
```
|
https://github.com/ansible/ansible/issues/65471
|
https://github.com/ansible/ansible/pull/65745
|
0b503f6057b5e60d84a3ee7fe11914eeacc05656
|
c58d8ed1f5f7f47f2a1d8069e04452353c052824
| 2019-12-03T18:53:41Z |
python
| 2020-01-21T21:58:26Z |
lib/ansible/module_utils/azure_rm_common.py
|
# Copyright (c) 2016 Matt Davis, <[email protected]>
# Chris Houseknecht, <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
import os
import re
import types
import copy
import inspect
import traceback
import json
from os.path import expanduser
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
try:
from ansible.module_utils.ansible_release import __version__ as ANSIBLE_VERSION
except Exception:
ANSIBLE_VERSION = 'unknown'
from ansible.module_utils.six.moves import configparser
import ansible.module_utils.six.moves.urllib.parse as urlparse
AZURE_COMMON_ARGS = dict(
auth_source=dict(
type='str',
choices=['auto', 'cli', 'env', 'credential_file', 'msi']
),
profile=dict(type='str'),
subscription_id=dict(type='str'),
client_id=dict(type='str', no_log=True),
secret=dict(type='str', no_log=True),
tenant=dict(type='str', no_log=True),
ad_user=dict(type='str', no_log=True),
password=dict(type='str', no_log=True),
cloud_environment=dict(type='str', default='AzureCloud'),
cert_validation_mode=dict(type='str', choices=['validate', 'ignore']),
api_profile=dict(type='str', default='latest'),
adfs_authority_url=dict(type='str', default=None)
)
AZURE_CREDENTIAL_ENV_MAPPING = dict(
profile='AZURE_PROFILE',
subscription_id='AZURE_SUBSCRIPTION_ID',
client_id='AZURE_CLIENT_ID',
secret='AZURE_SECRET',
tenant='AZURE_TENANT',
ad_user='AZURE_AD_USER',
password='AZURE_PASSWORD',
cloud_environment='AZURE_CLOUD_ENVIRONMENT',
cert_validation_mode='AZURE_CERT_VALIDATION_MODE',
adfs_authority_url='AZURE_ADFS_AUTHORITY_URL'
)
class SDKProfile(object): # pylint: disable=too-few-public-methods
def __init__(self, default_api_version, profile=None):
"""Constructor.
:param str default_api_version: Default API version if not overridden by a profile. Nullable.
:param profile: A dict mapping operation group name to API version.
:type profile: dict[str, str]
"""
self.profile = profile if profile is not None else {}
self.profile[None] = default_api_version
@property
def default_api_version(self):
return self.profile[None]
# FUTURE: this should come from the SDK or an external location.
# For now, we have to copy from azure-cli
AZURE_API_PROFILES = {
'latest': {
'ContainerInstanceManagementClient': '2018-02-01-preview',
'ComputeManagementClient': dict(
default_api_version='2018-10-01',
resource_skus='2018-10-01',
disks='2018-06-01',
snapshots='2018-10-01',
virtual_machine_run_commands='2018-10-01'
),
'NetworkManagementClient': '2018-08-01',
'ResourceManagementClient': '2017-05-10',
'StorageManagementClient': '2017-10-01',
'WebSiteManagementClient': '2018-02-01',
'PostgreSQLManagementClient': '2017-12-01',
'MySQLManagementClient': '2017-12-01',
'MariaDBManagementClient': '2019-03-01',
'ManagementLockClient': '2016-09-01'
},
'2019-03-01-hybrid': {
'StorageManagementClient': '2017-10-01',
'NetworkManagementClient': '2017-10-01',
'ComputeManagementClient': SDKProfile('2017-12-01', {
'resource_skus': '2017-09-01',
'disks': '2017-03-30',
'snapshots': '2017-03-30'
}),
'ManagementLinkClient': '2016-09-01',
'ManagementLockClient': '2016-09-01',
'PolicyClient': '2016-12-01',
'ResourceManagementClient': '2018-05-01',
'SubscriptionClient': '2016-06-01',
'DnsManagementClient': '2016-04-01',
'KeyVaultManagementClient': '2016-10-01',
'AuthorizationManagementClient': SDKProfile('2015-07-01', {
'classic_administrators': '2015-06-01',
'policy_assignments': '2016-12-01',
'policy_definitions': '2016-12-01'
}),
'KeyVaultClient': '2016-10-01',
'azure.multiapi.storage': '2017-11-09',
'azure.multiapi.cosmosdb': '2017-04-17'
},
'2018-03-01-hybrid': {
'StorageManagementClient': '2016-01-01',
'NetworkManagementClient': '2017-10-01',
'ComputeManagementClient': SDKProfile('2017-03-30'),
'ManagementLinkClient': '2016-09-01',
'ManagementLockClient': '2016-09-01',
'PolicyClient': '2016-12-01',
'ResourceManagementClient': '2018-02-01',
'SubscriptionClient': '2016-06-01',
'DnsManagementClient': '2016-04-01',
'KeyVaultManagementClient': '2016-10-01',
'AuthorizationManagementClient': SDKProfile('2015-07-01', {
'classic_administrators': '2015-06-01'
}),
'KeyVaultClient': '2016-10-01',
'azure.multiapi.storage': '2017-04-17',
'azure.multiapi.cosmosdb': '2017-04-17'
},
'2017-03-09-profile': {
'StorageManagementClient': '2016-01-01',
'NetworkManagementClient': '2015-06-15',
'ComputeManagementClient': SDKProfile('2016-03-30'),
'ManagementLinkClient': '2016-09-01',
'ManagementLockClient': '2015-01-01',
'PolicyClient': '2015-10-01-preview',
'ResourceManagementClient': '2016-02-01',
'SubscriptionClient': '2016-06-01',
'DnsManagementClient': '2016-04-01',
'KeyVaultManagementClient': '2016-10-01',
'AuthorizationManagementClient': SDKProfile('2015-07-01', {
'classic_administrators': '2015-06-01'
}),
'KeyVaultClient': '2016-10-01',
'azure.multiapi.storage': '2015-04-05'
}
}
AZURE_TAG_ARGS = dict(
tags=dict(type='dict'),
append_tags=dict(type='bool', default=True),
)
AZURE_COMMON_REQUIRED_IF = [
('log_mode', 'file', ['log_path'])
]
ANSIBLE_USER_AGENT = 'Ansible/{0}'.format(ANSIBLE_VERSION)
CLOUDSHELL_USER_AGENT_KEY = 'AZURE_HTTP_USER_AGENT'
VSCODEEXT_USER_AGENT_KEY = 'VSCODEEXT_USER_AGENT'
CIDR_PATTERN = re.compile(r"(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1"
r"[0-9]{2}|2[0-4][0-9]|25[0-5])(/([0-9]|[1-2][0-9]|3[0-2]))")
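The CIDR_PATTERN above can be exercised on its own; a quick standalone check of what it accepts (it requires four dotted octets in 0-255 plus a /0-32 prefix length):

```python
import re

# Same pattern as CIDR_PATTERN above.
CIDR_PATTERN = re.compile(r"(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1"
                          r"[0-9]{2}|2[0-4][0-9]|25[0-5])(/([0-9]|[1-2][0-9]|3[0-2]))")

assert CIDR_PATTERN.match("10.0.0.0/24") is not None
assert CIDR_PATTERN.match("256.0.0.0/8") is None   # octet out of range
assert CIDR_PATTERN.match("10.0.0.0") is None      # prefix length is required
```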
AZURE_SUCCESS_STATE = "Succeeded"
AZURE_FAILED_STATE = "Failed"
HAS_AZURE = True
HAS_AZURE_EXC = None
HAS_AZURE_CLI_CORE = True
HAS_AZURE_CLI_CORE_EXC = None
HAS_MSRESTAZURE = True
HAS_MSRESTAZURE_EXC = None
try:
import importlib
except ImportError:
# This passes the sanity import test, but does not provide a user friendly error message.
# Doing so would require catching Exception for all imports of Azure dependencies in modules and module_utils.
importlib = None
try:
from packaging.version import Version
HAS_PACKAGING_VERSION = True
HAS_PACKAGING_VERSION_EXC = None
except ImportError:
Version = None
HAS_PACKAGING_VERSION = False
HAS_PACKAGING_VERSION_EXC = traceback.format_exc()
# NB: packaging issue sometimes cause msrestazure not to be installed, check it separately
try:
from msrest.serialization import Serializer
except ImportError:
HAS_MSRESTAZURE_EXC = traceback.format_exc()
HAS_MSRESTAZURE = False
try:
from enum import Enum
from msrestazure.azure_active_directory import AADTokenCredentials
from msrestazure.azure_exceptions import CloudError
from msrestazure.azure_active_directory import MSIAuthentication
from msrestazure.tools import parse_resource_id, resource_id, is_valid_resource_id
from msrestazure import azure_cloud
from azure.common.credentials import ServicePrincipalCredentials, UserPassCredentials
from azure.mgmt.monitor.version import VERSION as monitor_client_version
from azure.mgmt.network.version import VERSION as network_client_version
from azure.mgmt.storage.version import VERSION as storage_client_version
from azure.mgmt.compute.version import VERSION as compute_client_version
from azure.mgmt.resource.version import VERSION as resource_client_version
from azure.mgmt.dns.version import VERSION as dns_client_version
from azure.mgmt.web.version import VERSION as web_client_version
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.resource.resources import ResourceManagementClient
from azure.mgmt.resource.subscriptions import SubscriptionClient
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.dns import DnsManagementClient
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.marketplaceordering import MarketplaceOrderingAgreements
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.storage.cloudstorageaccount import CloudStorageAccount
from azure.storage.blob import PageBlobService, BlockBlobService
from adal.authentication_context import AuthenticationContext
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.servicebus import ServiceBusManagementClient
import azure.mgmt.servicebus.models as ServicebusModel
from azure.mgmt.rdbms.postgresql import PostgreSQLManagementClient
from azure.mgmt.rdbms.mysql import MySQLManagementClient
from azure.mgmt.rdbms.mariadb import MariaDBManagementClient
from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.loganalytics import LogAnalyticsManagementClient
import azure.mgmt.loganalytics.models as LogAnalyticsModels
from azure.mgmt.automation import AutomationClient
import azure.mgmt.automation.models as AutomationModel
from azure.mgmt.iothub import IotHubClient
from azure.mgmt.iothub import models as IoTHubModels
from msrest.service_client import ServiceClient
from msrestazure import AzureConfiguration
from msrest.authentication import Authentication
from azure.mgmt.resource.locks import ManagementLockClient
except ImportError as exc:
Authentication = object
HAS_AZURE_EXC = traceback.format_exc()
HAS_AZURE = False
from base64 import b64encode, b64decode
from hashlib import sha256
from hmac import HMAC
from time import time
try:
from urllib import (urlencode, quote_plus)
except ImportError:
from urllib.parse import (urlencode, quote_plus)
try:
from azure.cli.core.util import CLIError
from azure.common.credentials import get_azure_cli_credentials, get_cli_profile
from azure.common.cloud import get_cli_active_cloud
except ImportError:
HAS_AZURE_CLI_CORE = False
HAS_AZURE_CLI_CORE_EXC = None
CLIError = Exception
def azure_id_to_dict(id):
pieces = re.sub(r'^\/', '', id).split('/')
result = {}
index = 0
while index < len(pieces) - 1:
result[pieces[index]] = pieces[index + 1]
index += 1
return result
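azure_id_to_dict walks the ID path and maps each segment to the one after it; because the index advances by one rather than two, intermediate pairs overlap, but the useful keys (resourceGroups, the resource type, and so on) come out right. A standalone run with a hypothetical resource ID:

```python
import re

def azure_id_to_dict(id):
    # Same logic as above: strip the leading slash, then map each path
    # segment to its successor (the index advances by 1, so pairs overlap).
    pieces = re.sub(r'^\/', '', id).split('/')
    result = {}
    index = 0
    while index < len(pieces) - 1:
        result[pieces[index]] = pieces[index + 1]
        index += 1
    return result

# Hypothetical resource ID for illustration
rid = "/subscriptions/xxxx/resourceGroups/myrg/providers/Microsoft.Network/virtualNetworks/vnet01"
parsed = azure_id_to_dict(rid)
assert parsed['resourceGroups'] == 'myrg'
assert parsed['virtualNetworks'] == 'vnet01'
```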
def format_resource_id(val, subscription_id, namespace, types, resource_group):
return resource_id(name=val,
resource_group=resource_group,
namespace=namespace,
type=types,
subscription=subscription_id) if not is_valid_resource_id(val) else val
def normalize_location_name(name):
return name.replace(' ', '').lower()
# FUTURE: either get this from the requirements file (if we can be sure it's always available at runtime)
# or generate the requirements files from this so we only have one source of truth to maintain...
AZURE_PKG_VERSIONS = {
'StorageManagementClient': {
'package_name': 'storage',
'expected_version': '3.1.0'
},
'ComputeManagementClient': {
'package_name': 'compute',
'expected_version': '4.4.0'
},
'ContainerInstanceManagementClient': {
'package_name': 'containerinstance',
'expected_version': '0.4.0'
},
'NetworkManagementClient': {
'package_name': 'network',
'expected_version': '2.3.0'
},
'ResourceManagementClient': {
'package_name': 'resource',
'expected_version': '2.1.0'
},
'DnsManagementClient': {
'package_name': 'dns',
'expected_version': '2.1.0'
},
'WebSiteManagementClient': {
'package_name': 'web',
'expected_version': '0.41.0'
},
'TrafficManagerManagementClient': {
'package_name': 'trafficmanager',
'expected_version': '0.50.0'
},
} if HAS_AZURE else {}
AZURE_MIN_RELEASE = '2.0.0'
class AzureRMModuleBase(object):
def __init__(self, derived_arg_spec, bypass_checks=False, no_log=False,
check_invalid_arguments=None, mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False, supports_check_mode=False,
required_if=None, supports_tags=True, facts_module=False, skip_exec=False):
merged_arg_spec = dict()
merged_arg_spec.update(AZURE_COMMON_ARGS)
if supports_tags:
merged_arg_spec.update(AZURE_TAG_ARGS)
if derived_arg_spec:
merged_arg_spec.update(derived_arg_spec)
merged_required_if = list(AZURE_COMMON_REQUIRED_IF)
if required_if:
merged_required_if += required_if
self.module = AnsibleModule(argument_spec=merged_arg_spec,
bypass_checks=bypass_checks,
no_log=no_log,
check_invalid_arguments=check_invalid_arguments,
mutually_exclusive=mutually_exclusive,
required_together=required_together,
required_one_of=required_one_of,
add_file_common_args=add_file_common_args,
supports_check_mode=supports_check_mode,
required_if=merged_required_if)
if not HAS_PACKAGING_VERSION:
self.fail(msg=missing_required_lib('packaging'),
exception=HAS_PACKAGING_VERSION_EXC)
if not HAS_MSRESTAZURE:
self.fail(msg=missing_required_lib('msrestazure'),
exception=HAS_MSRESTAZURE_EXC)
if not HAS_AZURE:
self.fail(msg=missing_required_lib('ansible[azure] (azure >= {0})'.format(AZURE_MIN_RELEASE)),
exception=HAS_AZURE_EXC)
self._network_client = None
self._storage_client = None
self._resource_client = None
self._compute_client = None
self._dns_client = None
self._web_client = None
self._marketplace_client = None
self._sql_client = None
self._mysql_client = None
self._mariadb_client = None
self._postgresql_client = None
self._containerregistry_client = None
self._containerinstance_client = None
self._containerservice_client = None
self._managedcluster_client = None
self._traffic_manager_management_client = None
self._monitor_client = None
self._resource = None
self._log_analytics_client = None
self._servicebus_client = None
self._automation_client = None
self._IoThub_client = None
self._lock_client = None
self.check_mode = self.module.check_mode
self.api_profile = self.module.params.get('api_profile')
self.facts_module = facts_module
# self.debug = self.module.params.get('debug')
# delegate auth to AzureRMAuth class (shared with all plugin types)
self.azure_auth = AzureRMAuth(fail_impl=self.fail, **self.module.params)
# common parameter validation
if self.module.params.get('tags'):
self.validate_tags(self.module.params['tags'])
if not skip_exec:
res = self.exec_module(**self.module.params)
self.module.exit_json(**res)
def check_client_version(self, client_type):
# Ensure the installed Azure SDK client packages meet the minimum expected versions.
package_version = AZURE_PKG_VERSIONS.get(client_type.__name__, None)
if package_version is not None:
client_name = package_version.get('package_name')
try:
client_module = importlib.import_module(client_type.__module__)
client_version = client_module.VERSION
except (RuntimeError, AttributeError):
# can't get at the module version for some reason; skip the check silently...
return
expected_version = package_version.get('expected_version')
if Version(client_version) < Version(expected_version):
self.fail("Installed azure-mgmt-{0} client version is {1}. The minimum supported version is {2}. Try "
"`pip install ansible[azure]`".format(client_name, client_version, expected_version))
if Version(client_version) != Version(expected_version):
self.module.warn("Installed azure-mgmt-{0} client version is {1}. The expected version is {2}. Try "
"`pip install ansible[azure]`".format(client_name, client_version, expected_version))
def exec_module(self, **kwargs):
self.fail("Error: {0} failed to implement exec_module method.".format(self.__class__.__name__))
def fail(self, msg, **kwargs):
'''
Shortcut for calling module.fail()
:param msg: Error message text.
:param kwargs: Any key=value pairs
:return: None
'''
self.module.fail_json(msg=msg, **kwargs)
def deprecate(self, msg, version=None):
self.module.deprecate(msg, version)
def log(self, msg, pretty_print=False):
if pretty_print:
self.module.debug(json.dumps(msg, indent=4, sort_keys=True))
else:
self.module.debug(msg)
def validate_tags(self, tags):
'''
Check if tags dictionary contains string:string pairs.
:param tags: dictionary of string:string pairs
:return: None
'''
if not self.facts_module:
if not isinstance(tags, dict):
self.fail("Tags must be a dictionary of string:string values.")
for key, value in tags.items():
if not isinstance(value, str):
self.fail("Tags values must be strings. Found {0}:{1}".format(str(key), str(value)))
def update_tags(self, tags):
'''
Call from the module to update metadata tags. Returns tuple
with bool indicating if there was a change and dict of new
tags to assign to the object.
:param tags: metadata tags from the object
:return: bool, dict
'''
tags = tags or dict()
new_tags = copy.copy(tags) if isinstance(tags, dict) else dict()
param_tags = self.module.params.get('tags') if isinstance(self.module.params.get('tags'), dict) else dict()
append_tags = self.module.params.get('append_tags') if self.module.params.get('append_tags') is not None else True
changed = False
# check add or update
for key, value in param_tags.items():
if not new_tags.get(key) or new_tags[key] != value:
changed = True
new_tags[key] = value
# check remove
if not append_tags:
for key, value in tags.items():
if not param_tags.get(key):
new_tags.pop(key)
changed = True
return changed, new_tags
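A condensed, standalone sketch of the update_tags merge semantics above (append_tags=True only adds or updates keys; False also removes existing keys absent from the requested tags). merge_tags is a hypothetical helper name, not part of the module:

```python
def merge_tags(existing, requested, append=True):
    # Mirrors update_tags above: requested keys are added or updated;
    # with append=False, existing keys missing from requested are removed.
    new_tags = dict(existing or {})
    changed = False
    for key, value in (requested or {}).items():
        if new_tags.get(key) != value:
            new_tags[key] = value
            changed = True
    if not append:
        for key in list(new_tags):
            if key not in (requested or {}):
                new_tags.pop(key)
                changed = True
    return changed, new_tags

assert merge_tags({'env': 'dev'}, {'owner': 'ops'}) == (True, {'env': 'dev', 'owner': 'ops'})
assert merge_tags({'env': 'dev'}, {'owner': 'ops'}, append=False) == (True, {'owner': 'ops'})
```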
def has_tags(self, obj_tags, tag_list):
'''
Used in fact modules to compare object tags to list of parameter tags. Return true if list of parameter tags
exists in object tags.
:param obj_tags: dictionary of tags from an Azure object.
:param tag_list: list of tag keys or tag key:value pairs
:return: bool
'''
if not obj_tags and tag_list:
return False
if not tag_list:
return True
matches = 0
result = False
for tag in tag_list:
tag_key = tag
tag_value = None
if ':' in tag:
tag_key, tag_value = tag.split(':')
if tag_value and obj_tags.get(tag_key) == tag_value:
matches += 1
elif not tag_value and obj_tags.get(tag_key):
matches += 1
if matches == len(tag_list):
result = True
return result
def get_resource_group(self, resource_group):
'''
Fetch a resource group.
:param resource_group: name of a resource group
:return: resource group object
'''
try:
return self.rm_client.resource_groups.get(resource_group)
except CloudError as cloud_error:
self.fail("Error retrieving resource group {0} - {1}".format(resource_group, cloud_error.message))
except Exception as exc:
self.fail("Error retrieving resource group {0} - {1}".format(resource_group, str(exc)))
def parse_resource_to_dict(self, resource):
'''
Return a dict of the given resource, containing its name and resource group.
:param resource: Can be a resource name, a resource ID, or a dict containing name and resource group.
'''
resource_dict = parse_resource_id(resource) if not isinstance(resource, dict) else resource
resource_dict['resource_group'] = resource_dict.get('resource_group', self.resource_group)
resource_dict['subscription_id'] = resource_dict.get('subscription_id', self.subscription_id)
return resource_dict
def serialize_obj(self, obj, class_name, enum_modules=None):
'''
Return a JSON representation of an Azure object.
:param obj: Azure object
:param class_name: Name of the object's class
:param enum_modules: List of module names to build enum dependencies from.
:return: serialized result
'''
enum_modules = [] if enum_modules is None else enum_modules
dependencies = dict()
if enum_modules:
for module_name in enum_modules:
mod = importlib.import_module(module_name)
for mod_class_name, mod_class_obj in inspect.getmembers(mod, predicate=inspect.isclass):
dependencies[mod_class_name] = mod_class_obj
self.log("dependencies: ")
self.log(str(dependencies))
serializer = Serializer(classes=dependencies)
return serializer.body(obj, class_name, keep_readonly=True)
def get_poller_result(self, poller, wait=5):
'''
Consistent method of waiting on and retrieving results from Azure's long poller
:param poller: Azure poller object
:return: object resulting from the original request
'''
try:
delay = wait
while not poller.done():
self.log("Waiting for {0} sec".format(delay))
poller.wait(timeout=delay)
return poller.result()
except Exception as exc:
self.log(str(exc))
raise
def check_provisioning_state(self, azure_object, requested_state='present'):
'''
Check an Azure object's provisioning state. If something did not complete the provisioning
process, then we cannot operate on it.
:param azure_object: An object such as a subnet, storageaccount, etc. Must have provisioning_state
and name attributes.
:return: None
'''
if hasattr(azure_object, 'properties') and hasattr(azure_object.properties, 'provisioning_state') and \
hasattr(azure_object, 'name'):
# resource group object fits this model
if isinstance(azure_object.properties.provisioning_state, Enum):
if azure_object.properties.provisioning_state.value != AZURE_SUCCESS_STATE and \
requested_state != 'absent':
self.fail("Error {0} has a provisioning state of {1}. Expecting state to be {2}.".format(
azure_object.name, azure_object.properties.provisioning_state, AZURE_SUCCESS_STATE))
return
if azure_object.properties.provisioning_state != AZURE_SUCCESS_STATE and \
requested_state != 'absent':
self.fail("Error {0} has a provisioning state of {1}. Expecting state to be {2}.".format(
azure_object.name, azure_object.properties.provisioning_state, AZURE_SUCCESS_STATE))
return
if hasattr(azure_object, 'provisioning_state') or not hasattr(azure_object, 'name'):
if isinstance(azure_object.provisioning_state, Enum):
if azure_object.provisioning_state.value != AZURE_SUCCESS_STATE and requested_state != 'absent':
self.fail("Error {0} has a provisioning state of {1}. Expecting state to be {2}.".format(
azure_object.name, azure_object.provisioning_state, AZURE_SUCCESS_STATE))
return
if azure_object.provisioning_state != AZURE_SUCCESS_STATE and requested_state != 'absent':
self.fail("Error {0} has a provisioning state of {1}. Expecting state to be {2}.".format(
azure_object.name, azure_object.provisioning_state, AZURE_SUCCESS_STATE))
def get_blob_client(self, resource_group_name, storage_account_name, storage_blob_type='block'):
keys = dict()
try:
# Get keys from the storage account
self.log('Getting keys')
account_keys = self.storage_client.storage_accounts.list_keys(resource_group_name, storage_account_name)
except Exception as exc:
self.fail("Error getting keys for account {0} - {1}".format(storage_account_name, str(exc)))
try:
self.log('Create blob service')
if storage_blob_type == 'page':
return PageBlobService(endpoint_suffix=self._cloud_environment.suffixes.storage_endpoint,
account_name=storage_account_name,
account_key=account_keys.keys[0].value)
elif storage_blob_type == 'block':
return BlockBlobService(endpoint_suffix=self._cloud_environment.suffixes.storage_endpoint,
account_name=storage_account_name,
account_key=account_keys.keys[0].value)
else:
raise Exception("Invalid storage blob type defined.")
except Exception as exc:
self.fail("Error creating blob service client for storage account {0} - {1}".format(storage_account_name,
str(exc)))
def create_default_pip(self, resource_group, location, public_ip_name, allocation_method='Dynamic', sku=None):
'''
Create a default public IP address <public_ip_name> to associate with a network interface.
If a PIP address matching <public_ip_name> exists, return it. Otherwise, create one.
:param resource_group: name of an existing resource group
:param location: a valid azure location
:param public_ip_name: base name to assign the public IP address
:param allocation_method: one of 'Static' or 'Dynamic'
:param sku: public IP SKU (for example, 'Basic' or 'Standard')
:return: PIP object
'''
pip = None
self.log("Starting create_default_pip {0}".format(public_ip_name))
self.log("Check to see if public IP {0} exists".format(public_ip_name))
try:
pip = self.network_client.public_ip_addresses.get(resource_group, public_ip_name)
except CloudError:
pass
if pip:
self.log("Public ip {0} found.".format(public_ip_name))
self.check_provisioning_state(pip)
return pip
params = self.network_models.PublicIPAddress(
location=location,
public_ip_allocation_method=allocation_method,
sku=sku
)
self.log('Creating default public IP {0}'.format(public_ip_name))
try:
poller = self.network_client.public_ip_addresses.create_or_update(resource_group, public_ip_name, params)
except Exception as exc:
self.fail("Error creating {0} - {1}".format(public_ip_name, str(exc)))
return self.get_poller_result(poller)
def create_default_securitygroup(self, resource_group, location, security_group_name, os_type, open_ports):
'''
Create a default security group <security_group_name> to associate with a network interface. If a security group matching
<security_group_name> exists, return it. Otherwise, create one.
:param resource_group: Resource group name
:param location: azure location name
:param security_group_name: base name to use for the security group
:param os_type: one of 'Windows' or 'Linux'. Determines the default rules added to the security group.
:param open_ports: optional list of ports to open; when not supplied, default SSH (Linux) or RDP/WinRM (Windows) rules are added.
:return: security_group object
'''
group = None
self.log("Create security group {0}".format(security_group_name))
self.log("Check to see if security group {0} exists".format(security_group_name))
try:
group = self.network_client.network_security_groups.get(resource_group, security_group_name)
except CloudError:
pass
if group:
self.log("Security group {0} found.".format(security_group_name))
self.check_provisioning_state(group)
return group
parameters = self.network_models.NetworkSecurityGroup()
parameters.location = location
if not open_ports:
# Open default ports based on OS type
if os_type == 'Linux':
# add an inbound SSH rule
parameters.security_rules = [
self.network_models.SecurityRule(protocol='Tcp',
source_address_prefix='*',
destination_address_prefix='*',
access='Allow',
direction='Inbound',
description='Allow SSH Access',
source_port_range='*',
destination_port_range='22',
priority=100,
name='SSH')
]
parameters.location = location
else:
# for windows add inbound RDP and WinRM rules
parameters.security_rules = [
self.network_models.SecurityRule(protocol='Tcp',
source_address_prefix='*',
destination_address_prefix='*',
access='Allow',
direction='Inbound',
description='Allow RDP port 3389',
source_port_range='*',
destination_port_range='3389',
priority=100,
name='RDP01'),
self.network_models.SecurityRule(protocol='Tcp',
source_address_prefix='*',
destination_address_prefix='*',
access='Allow',
direction='Inbound',
description='Allow WinRM HTTPS port 5986',
source_port_range='*',
destination_port_range='5986',
priority=101,
name='WinRM01'),
]
else:
# Open custom ports
parameters.security_rules = []
priority = 100
for port in open_ports:
priority += 1
rule_name = "Rule_{0}".format(priority)
parameters.security_rules.append(
self.network_models.SecurityRule(protocol='Tcp',
source_address_prefix='*',
destination_address_prefix='*',
access='Allow',
direction='Inbound',
source_port_range='*',
destination_port_range=str(port),
priority=priority,
name=rule_name)
)
self.log('Creating default security group {0}'.format(security_group_name))
try:
poller = self.network_client.network_security_groups.create_or_update(resource_group,
security_group_name,
parameters)
except Exception as exc:
self.fail("Error creating default security rule {0} - {1}".format(security_group_name, str(exc)))
return self.get_poller_result(poller)
@staticmethod
def _validation_ignore_callback(session, global_config, local_config, **kwargs):
session.verify = False
def get_api_profile(self, client_type_name, api_profile_name):
profile_all_clients = AZURE_API_PROFILES.get(api_profile_name)
if not profile_all_clients:
raise KeyError("unknown Azure API profile: {0}".format(api_profile_name))
profile_raw = profile_all_clients.get(client_type_name, None)
if not profile_raw:
self.module.warn("Azure API profile {0} does not define an entry for {1}".format(api_profile_name, client_type_name))
if isinstance(profile_raw, dict):
if not profile_raw.get('default_api_version'):
raise KeyError("Azure API profile {0} does not define 'default_api_version'".format(api_profile_name))
return profile_raw
# wrap basic strings in a dict that just defines the default
return dict(default_api_version=profile_raw)
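The lookup above normalizes two entry shapes — a bare version string or a per-operation-group dict — so callers always receive a dict with a 'default_api_version' key. A minimal standalone sketch of that normalization (resolve_profile and SAMPLE_PROFILES are hypothetical names with illustrative data):

```python
SAMPLE_PROFILES = {
    'latest': {
        'StorageManagementClient': '2017-10-01',
        'ComputeManagementClient': {'default_api_version': '2018-10-01', 'disks': '2018-06-01'},
    }
}

def resolve_profile(profile_name, client_name):
    clients = SAMPLE_PROFILES.get(profile_name)
    if clients is None:
        raise KeyError("unknown Azure API profile: {0}".format(profile_name))
    raw = clients.get(client_name)
    if isinstance(raw, dict):
        if not raw.get('default_api_version'):
            raise KeyError("profile entry lacks 'default_api_version'")
        return raw
    # wrap bare version strings so callers always see the same shape
    return {'default_api_version': raw}

assert resolve_profile('latest', 'StorageManagementClient') == {'default_api_version': '2017-10-01'}
assert resolve_profile('latest', 'ComputeManagementClient')['disks'] == '2018-06-01'
```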
def get_mgmt_svc_client(self, client_type, base_url=None, api_version=None):
self.log('Getting management service client {0}'.format(client_type.__name__))
self.check_client_version(client_type)
client_argspec = inspect.getargspec(client_type.__init__)
if not base_url:
# most things are resource_manager, don't make everyone specify
base_url = self.azure_auth._cloud_environment.endpoints.resource_manager
client_kwargs = dict(credentials=self.azure_auth.azure_credentials, subscription_id=self.azure_auth.subscription_id, base_url=base_url)
api_profile_dict = {}
if self.api_profile:
api_profile_dict = self.get_api_profile(client_type.__name__, self.api_profile)
# unversioned clients won't accept profile; only send it if necessary
# clients without a version specified in the profile will use the default
if api_profile_dict and 'profile' in client_argspec.args:
client_kwargs['profile'] = api_profile_dict
# If the client doesn't accept api_version, it's unversioned.
# If it does, favor explicitly-specified api_version, fall back to api_profile
if 'api_version' in client_argspec.args:
profile_default_version = api_profile_dict.get('default_api_version', None)
if api_version or profile_default_version:
client_kwargs['api_version'] = api_version or profile_default_version
if 'profile' in client_kwargs:
# remove profile; only pass API version if specified
client_kwargs.pop('profile')
client = client_type(**client_kwargs)
# FUTURE: remove this once everything exposes models directly (eg, containerinstance)
try:
getattr(client, "models")
except AttributeError:
def _ansible_get_models(self, *arg, **kwarg):
return self._ansible_models
setattr(client, '_ansible_models', importlib.import_module(client_type.__module__).models)
client.models = types.MethodType(_ansible_get_models, client)
client.config = self.add_user_agent(client.config)
if self.azure_auth._cert_validation_mode == 'ignore':
client.config.session_configuration_callback = self._validation_ignore_callback
return client
def add_user_agent(self, config):
# Add user agent for Ansible
config.add_user_agent(ANSIBLE_USER_AGENT)
# Add user agent when running from Cloud Shell
if CLOUDSHELL_USER_AGENT_KEY in os.environ:
config.add_user_agent(os.environ[CLOUDSHELL_USER_AGENT_KEY])
# Add user agent when running from VSCode extension
if VSCODEEXT_USER_AGENT_KEY in os.environ:
config.add_user_agent(os.environ[VSCODEEXT_USER_AGENT_KEY])
return config
def generate_sas_token(self, **kwargs):
base_url = kwargs.get('base_url', None)
expiry = kwargs.get('expiry', time() + 3600)
key = kwargs.get('key', None)
policy = kwargs.get('policy', None)
url = quote_plus(base_url)
ttl = int(expiry)
sign_key = '{0}\n{1}'.format(url, ttl)
signature = b64encode(HMAC(b64decode(key), sign_key.encode('utf-8'), sha256).digest())
result = {
'sr': url,
'sig': signature,
'se': str(ttl),
}
if policy:
result['skn'] = policy
return 'SharedAccessSignature ' + urlencode(result)
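The same HMAC-SHA256 signing can be run standalone. sas_token below is a condensed sketch of generate_sas_token — the signed string is the URL-encoded resource plus a newline plus the expiry timestamp — and the key value is a made-up example, not a real credential:

```python
from base64 import b64encode, b64decode
from hashlib import sha256
from hmac import HMAC
from time import time
from urllib.parse import urlencode, quote_plus

def sas_token(base_url, key, policy=None, ttl=3600):
    # key: base64-encoded shared access key; the signed payload is
    # "<url-encoded resource>\n<expiry timestamp>"
    url = quote_plus(base_url)
    expiry = int(time() + ttl)
    sign_key = '{0}\n{1}'.format(url, expiry)
    signature = b64encode(HMAC(b64decode(key), sign_key.encode('utf-8'), sha256).digest()).decode('utf-8')
    result = {'sr': url, 'sig': signature, 'se': str(expiry)}
    if policy:
        result['skn'] = policy
    return 'SharedAccessSignature ' + urlencode(result)

fake_key = b64encode(b'not-a-real-key').decode('utf-8')  # placeholder credential
token = sas_token('https://example.servicebus.windows.net/queue1', fake_key, policy='RootPolicy')
assert token.startswith('SharedAccessSignature ')
assert 'skn=RootPolicy' in token
```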
def get_data_svc_client(self, **kwargs):
url = kwargs.get('base_url', None)
config = AzureConfiguration(base_url='https://{0}'.format(url))
config.credentials = AzureSASAuthentication(token=self.generate_sas_token(**kwargs))
config = self.add_user_agent(config)
return ServiceClient(creds=config.credentials, config=config)
# passthru methods to AzureAuth instance for backcompat
@property
def credentials(self):
return self.azure_auth.credentials
@property
def _cloud_environment(self):
return self.azure_auth._cloud_environment
@property
def subscription_id(self):
return self.azure_auth.subscription_id
@property
def storage_client(self):
self.log('Getting storage client...')
if not self._storage_client:
self._storage_client = self.get_mgmt_svc_client(StorageManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2018-07-01')
return self._storage_client
@property
def storage_models(self):
return StorageManagementClient.models("2018-07-01")
@property
def network_client(self):
self.log('Getting network client')
if not self._network_client:
self._network_client = self.get_mgmt_svc_client(NetworkManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2019-06-01')
return self._network_client
@property
def network_models(self):
self.log("Getting network models...")
return NetworkManagementClient.models("2018-08-01")
@property
def rm_client(self):
self.log('Getting resource manager client')
if not self._resource_client:
self._resource_client = self.get_mgmt_svc_client(ResourceManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2017-05-10')
return self._resource_client
@property
def rm_models(self):
self.log("Getting resource manager models")
return ResourceManagementClient.models("2017-05-10")
@property
def compute_client(self):
self.log('Getting compute client')
if not self._compute_client:
self._compute_client = self.get_mgmt_svc_client(ComputeManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2019-07-01')
return self._compute_client
@property
def compute_models(self):
self.log("Getting compute models")
return ComputeManagementClient.models("2019-07-01")
@property
def dns_client(self):
self.log('Getting dns client')
if not self._dns_client:
self._dns_client = self.get_mgmt_svc_client(DnsManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2018-05-01')
return self._dns_client
@property
def dns_models(self):
self.log("Getting dns models...")
return DnsManagementClient.models('2018-05-01')
@property
def web_client(self):
self.log('Getting web client')
if not self._web_client:
self._web_client = self.get_mgmt_svc_client(WebSiteManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2018-02-01')
return self._web_client
@property
def containerservice_client(self):
self.log('Getting container service client')
if not self._containerservice_client:
self._containerservice_client = self.get_mgmt_svc_client(ContainerServiceClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2017-07-01')
return self._containerservice_client
@property
def managedcluster_models(self):
self.log("Getting container service models")
return ContainerServiceClient.models('2018-03-31')
@property
def managedcluster_client(self):
self.log('Getting container service client')
if not self._managedcluster_client:
self._managedcluster_client = self.get_mgmt_svc_client(ContainerServiceClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2018-03-31')
return self._managedcluster_client
@property
def sql_client(self):
self.log('Getting SQL client')
if not self._sql_client:
self._sql_client = self.get_mgmt_svc_client(SqlManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._sql_client
@property
def postgresql_client(self):
self.log('Getting PostgreSQL client')
if not self._postgresql_client:
self._postgresql_client = self.get_mgmt_svc_client(PostgreSQLManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._postgresql_client
@property
def mysql_client(self):
self.log('Getting MySQL client')
if not self._mysql_client:
self._mysql_client = self.get_mgmt_svc_client(MySQLManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._mysql_client
@property
def mariadb_client(self):
self.log('Getting MariaDB client')
if not self._mariadb_client:
self._mariadb_client = self.get_mgmt_svc_client(MariaDBManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._mariadb_client
@property
def containerregistry_client(self):
self.log('Getting container registry mgmt client')
if not self._containerregistry_client:
self._containerregistry_client = self.get_mgmt_svc_client(ContainerRegistryManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2017-10-01')
return self._containerregistry_client
@property
def containerinstance_client(self):
self.log('Getting container instance mgmt client')
if not self._containerinstance_client:
self._containerinstance_client = self.get_mgmt_svc_client(ContainerInstanceManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2018-06-01')
return self._containerinstance_client
@property
def marketplace_client(self):
self.log('Getting marketplace agreement client')
if not self._marketplace_client:
self._marketplace_client = self.get_mgmt_svc_client(MarketplaceOrderingAgreements,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._marketplace_client
@property
def traffic_manager_management_client(self):
self.log('Getting traffic manager client')
if not self._traffic_manager_management_client:
self._traffic_manager_management_client = self.get_mgmt_svc_client(TrafficManagerManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._traffic_manager_management_client
@property
def monitor_client(self):
self.log('Getting monitor client')
if not self._monitor_client:
self._monitor_client = self.get_mgmt_svc_client(MonitorManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._monitor_client
@property
def log_analytics_client(self):
self.log('Getting log analytics client')
if not self._log_analytics_client:
self._log_analytics_client = self.get_mgmt_svc_client(LogAnalyticsManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._log_analytics_client
@property
def log_analytics_models(self):
self.log('Getting log analytics models')
return LogAnalyticsModels
@property
def servicebus_client(self):
self.log('Getting servicebus client')
if not self._servicebus_client:
self._servicebus_client = self.get_mgmt_svc_client(ServiceBusManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._servicebus_client
@property
def servicebus_models(self):
return ServicebusModel
@property
def automation_client(self):
self.log('Getting automation client')
if not self._automation_client:
self._automation_client = self.get_mgmt_svc_client(AutomationClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._automation_client
@property
def automation_models(self):
return AutomationModel
@property
def IoThub_client(self):
self.log('Getting iothub client')
if not self._IoThub_client:
self._IoThub_client = self.get_mgmt_svc_client(IotHubClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._IoThub_client
@property
def IoThub_models(self):
return IoTHubModels
@property
def lock_client(self):
self.log('Getting lock client')
if not self._lock_client:
self._lock_client = self.get_mgmt_svc_client(ManagementLockClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2016-09-01')
return self._lock_client
@property
def lock_models(self):
self.log("Getting lock models")
return ManagementLockClient.models('2016-09-01')
class AzureSASAuthentication(Authentication):
"""Simple SAS Authentication.
An implementation of Authentication in
https://github.com/Azure/msrest-for-python/blob/0732bc90bdb290e5f58c675ffdd7dbfa9acefc93/msrest/authentication.py
:param str token: SAS token
"""
def __init__(self, token):
self.token = token
def signed_session(self):
session = super(AzureSASAuthentication, self).signed_session()
session.headers['Authorization'] = self.token
return session
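The class above only ever sets one header. A minimal standalone sketch of what `signed_session` contributes, using a hypothetical stub object in place of the `requests` session that msrest's `Authentication` base class would return:

```python
class StubSession:
    # Minimal stand-in for the session object msrest would provide.
    def __init__(self):
        self.headers = {}

def sas_signed_session(token, session=None):
    # As in AzureSASAuthentication.signed_session: the raw SAS token
    # becomes the Authorization header of the outgoing session.
    session = session or StubSession()
    session.headers['Authorization'] = token
    return session

s = sas_signed_session('sv=2020-08-04&sig=abc')
print(s.headers['Authorization'])
```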
class AzureRMAuthException(Exception):
pass
class AzureRMAuth(object):
def __init__(self, auth_source='auto', profile=None, subscription_id=None, client_id=None, secret=None,
tenant=None, ad_user=None, password=None, cloud_environment='AzureCloud', cert_validation_mode='validate',
api_profile='latest', adfs_authority_url=None, fail_impl=None, **kwargs):
if fail_impl:
self._fail_impl = fail_impl
else:
self._fail_impl = self._default_fail_impl
self._cloud_environment = None
self._adfs_authority_url = None
# authenticate
self.credentials = self._get_credentials(
dict(auth_source=auth_source, profile=profile, subscription_id=subscription_id, client_id=client_id, secret=secret,
tenant=tenant, ad_user=ad_user, password=password, cloud_environment=cloud_environment,
cert_validation_mode=cert_validation_mode, api_profile=api_profile, adfs_authority_url=adfs_authority_url))
if not self.credentials:
if HAS_AZURE_CLI_CORE:
self.fail("Failed to get credentials. Either pass as parameters, set environment variables, "
"define a profile in ~/.azure/credentials, or log in with Azure CLI (`az login`).")
else:
self.fail("Failed to get credentials. Either pass as parameters, set environment variables, "
"define a profile in ~/.azure/credentials, or install Azure CLI and log in (`az login`).")
# cert validation mode precedence: module-arg, credential profile, env, "validate"
self._cert_validation_mode = cert_validation_mode or self.credentials.get('cert_validation_mode') or \
os.environ.get('AZURE_CERT_VALIDATION_MODE') or 'validate'
if self._cert_validation_mode not in ['validate', 'ignore']:
self.fail('invalid cert_validation_mode: {0}'.format(self._cert_validation_mode))
# if cloud_environment specified, look up/build Cloud object
raw_cloud_env = self.credentials.get('cloud_environment')
if self.credentials.get('credentials') is not None and raw_cloud_env is not None:
self._cloud_environment = raw_cloud_env
elif not raw_cloud_env:
self._cloud_environment = azure_cloud.AZURE_PUBLIC_CLOUD # SDK default
else:
# try to look up "well-known" values via the name attribute on azure_cloud members
all_clouds = [x[1] for x in inspect.getmembers(azure_cloud) if isinstance(x[1], azure_cloud.Cloud)]
matched_clouds = [x for x in all_clouds if x.name == raw_cloud_env]
if len(matched_clouds) == 1:
self._cloud_environment = matched_clouds[0]
elif len(matched_clouds) > 1:
self.fail("Azure SDK failure: more than one cloud matched for cloud_environment name '{0}'".format(raw_cloud_env))
else:
if not urlparse.urlparse(raw_cloud_env).scheme:
self.fail("cloud_environment must be an endpoint discovery URL or one of {0}".format([x.name for x in all_clouds]))
try:
self._cloud_environment = azure_cloud.get_cloud_from_metadata_endpoint(raw_cloud_env)
except Exception as e:
self.fail("cloud_environment {0} could not be resolved: {1}".format(raw_cloud_env, str(e)), exception=traceback.format_exc())  # e.message does not exist on Python 3
if self.credentials.get('subscription_id', None) is None and self.credentials.get('credentials') is None:
self.fail("Credentials did not include a subscription_id value.")
self.log("setting subscription_id")
self.subscription_id = self.credentials['subscription_id']
# get authentication authority
# for adfs, user could pass in authority or not.
# for others, use default authority from cloud environment
if self.credentials.get('adfs_authority_url') is None:
self._adfs_authority_url = self._cloud_environment.endpoints.active_directory
else:
self._adfs_authority_url = self.credentials.get('adfs_authority_url')
# get resource from cloud environment
self._resource = self._cloud_environment.endpoints.active_directory_resource_id
if self.credentials.get('credentials') is not None:
# AzureCLI credentials
self.azure_credentials = self.credentials['credentials']
elif self.credentials.get('client_id') is not None and \
self.credentials.get('secret') is not None and \
self.credentials.get('tenant') is not None:
self.azure_credentials = ServicePrincipalCredentials(client_id=self.credentials['client_id'],
secret=self.credentials['secret'],
tenant=self.credentials['tenant'],
cloud_environment=self._cloud_environment,
verify=self._cert_validation_mode == 'validate')
elif self.credentials.get('ad_user') is not None and \
self.credentials.get('password') is not None and \
self.credentials.get('client_id') is not None and \
self.credentials.get('tenant') is not None:
self.azure_credentials = self.acquire_token_with_username_password(
self._adfs_authority_url,
self._resource,
self.credentials['ad_user'],
self.credentials['password'],
self.credentials['client_id'],
self.credentials['tenant'])
elif self.credentials.get('ad_user') is not None and self.credentials.get('password') is not None:
tenant = self.credentials.get('tenant')
if not tenant:
tenant = 'common' # SDK default
self.azure_credentials = UserPassCredentials(self.credentials['ad_user'],
self.credentials['password'],
tenant=tenant,
cloud_environment=self._cloud_environment,
verify=self._cert_validation_mode == 'validate')
else:
self.fail("Failed to authenticate with provided credentials. Some attributes were missing. "
"Credentials must include client_id, secret and tenant or ad_user and password, or "
"ad_user, password, client_id, tenant and adfs_authority_url(optional) for ADFS authentication, or "
"be logged in using AzureCLI.")
def fail(self, msg, exception=None, **kwargs):
self._fail_impl(msg)
def _default_fail_impl(self, msg, exception=None, **kwargs):
raise AzureRMAuthException(msg)
def _get_profile(self, profile="default"):
path = expanduser("~/.azure/credentials")
try:
config = configparser.ConfigParser()
config.read(path)
except Exception as exc:
self.fail("Failed to access {0}. Check that the file exists and you have read "
"access. {1}".format(path, str(exc)))
credentials = dict()
for key in AZURE_CREDENTIAL_ENV_MAPPING:
try:
credentials[key] = config.get(profile, key, raw=True)
except Exception:
pass
if credentials.get('subscription_id'):
return credentials
return None
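The profile lookup above can be sketched without touching the filesystem. This hypothetical snippet reads an in-memory INI instead of `~/.azure/credentials`; the key names mirror the `AZURE_CREDENTIAL_ENV_MAPPING` entries that `_get_profile` iterates over:

```python
import configparser

# Hypothetical stand-in for the contents of ~/.azure/credentials.
SAMPLE = """
[default]
subscription_id = 11111111-2222-3333-4444-555555555555
client_id = my-client
secret = my-secret
tenant = my-tenant
"""

def read_profile(text, profile='default'):
    # Collect whichever known keys the profile section defines; like
    # _get_profile, return None when no subscription_id is present.
    config = configparser.ConfigParser()
    config.read_string(text)
    keys = ('subscription_id', 'client_id', 'secret', 'tenant')
    creds = {k: config.get(profile, k, raw=True)
             for k in keys if config.has_option(profile, k)}
    return creds if creds.get('subscription_id') else None

print(read_profile(SAMPLE)['client_id'])  # my-client
```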
def _get_msi_credentials(self, subscription_id_param=None, **kwargs):
client_id = kwargs.get('client_id', None)
credentials = MSIAuthentication(client_id=client_id)
subscription_id = subscription_id_param or os.environ.get(AZURE_CREDENTIAL_ENV_MAPPING['subscription_id'], None)
if not subscription_id:
try:
# use the first subscription of the MSI
subscription_client = SubscriptionClient(credentials)
subscription = next(subscription_client.subscriptions.list())
subscription_id = str(subscription.subscription_id)
except Exception as exc:
self.fail("Failed to get MSI token: {0}. "
"Check that MSI is enabled on this machine and that the identity has been granted access to at least one subscription.".format(str(exc)))
return {
'credentials': credentials,
'subscription_id': subscription_id
}
def _get_azure_cli_credentials(self):
credentials, subscription_id = get_azure_cli_credentials()
cloud_environment = get_cli_active_cloud()
cli_credentials = {
'credentials': credentials,
'subscription_id': subscription_id,
'cloud_environment': cloud_environment
}
return cli_credentials
def _get_env_credentials(self):
env_credentials = dict()
for attribute, env_variable in AZURE_CREDENTIAL_ENV_MAPPING.items():
env_credentials[attribute] = os.environ.get(env_variable, None)
if env_credentials['profile']:
credentials = self._get_profile(env_credentials['profile'])
return credentials
if env_credentials.get('subscription_id') is not None:
return env_credentials
return None
# TODO: use explicit kwargs instead of intermediate dict
def _get_credentials(self, params):
# Get authentication credentials.
self.log('Getting credentials')
arg_credentials = dict()
for attribute, env_variable in AZURE_CREDENTIAL_ENV_MAPPING.items():
arg_credentials[attribute] = params.get(attribute, None)
auth_source = params.get('auth_source', None)
if not auth_source:
auth_source = os.environ.get('ANSIBLE_AZURE_AUTH_SOURCE', 'auto')
if auth_source == 'msi':
self.log('Retrieving credentials from MSI')
return self._get_msi_credentials(arg_credentials['subscription_id'], client_id=params.get('client_id', None))
if auth_source == 'cli':
if not HAS_AZURE_CLI_CORE:
self.fail(msg=missing_required_lib('azure-cli', reason='for `cli` auth_source'),
exception=HAS_AZURE_CLI_CORE_EXC)
try:
self.log('Retrieving credentials from Azure CLI profile')
cli_credentials = self._get_azure_cli_credentials()
return cli_credentials
except CLIError as err:
self.fail("Azure CLI profile cannot be loaded - {0}".format(err))
if auth_source == 'env':
self.log('Retrieving credentials from environment')
env_credentials = self._get_env_credentials()
return env_credentials
if auth_source == 'credential_file':
self.log("Retrieving credentials from credential file")
profile = params.get('profile') or 'default'
default_credentials = self._get_profile(profile)
return default_credentials
# auto, precedence: module parameters -> environment variables -> default profile in ~/.azure/credentials
# try module params
if arg_credentials['profile'] is not None:
self.log('Retrieving credentials with profile parameter.')
credentials = self._get_profile(arg_credentials['profile'])
return credentials
if arg_credentials['subscription_id']:
self.log('Received credentials from parameters.')
return arg_credentials
# try environment
env_credentials = self._get_env_credentials()
if env_credentials:
self.log('Received credentials from env.')
return env_credentials
# try default profile from ~/.azure/credentials
default_credentials = self._get_profile()
if default_credentials:
self.log('Retrieved default profile credentials from ~/.azure/credentials.')
return default_credentials
try:
if HAS_AZURE_CLI_CORE:
self.log('Retrieving credentials from AzureCLI profile')
cli_credentials = self._get_azure_cli_credentials()
return cli_credentials
except CLIError as ce:
self.log('Error getting AzureCLI profile credentials - {0}'.format(ce))
return None
def acquire_token_with_username_password(self, authority, resource, username, password, client_id, tenant):
authority_uri = authority
if tenant is not None:
authority_uri = authority + '/' + tenant
context = AuthenticationContext(authority_uri)
token_response = context.acquire_token_with_username_password(resource, username, password, client_id)
return AADTokenCredentials(token_response)
def log(self, msg, pretty_print=False):
pass
# Use only during module development
# if self.debug:
# log_file = open('azure_rm.log', 'a')
# if pretty_print:
# log_file.write(json.dumps(msg, indent=4, sort_keys=True))
# else:
# log_file.write(msg + u'\n')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,471 |
lib/ansible/module_utils/basic.py contains deprecation which was supposed to be removed for 2.9
|
##### SUMMARY
See https://github.com/ansible/ansible/blob/88d8cf8197c53edd3bcdcd21429eb4c2bfbf0f6a/lib/ansible/module_utils/basic.py#L699-L706
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```paste below
2.9
devel
```
|
https://github.com/ansible/ansible/issues/65471
|
https://github.com/ansible/ansible/pull/65745
|
0b503f6057b5e60d84a3ee7fe11914eeacc05656
|
c58d8ed1f5f7f47f2a1d8069e04452353c052824
| 2019-12-03T18:53:41Z |
python
| 2020-01-21T21:58:26Z |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import tempfile
import time
import traceback
import types
from collections import deque
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal
has_journal = True
except ImportError:
has_journal = False
HAVE_SELINUX = False
try:
import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
from ansible.module_utils.common._collections_compat import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
handle_aliases,
list_deprecations,
list_no_log_values,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
check_mutually_exclusive,
check_required_arguments,
check_required_by,
check_required_if,
check_required_one_of,
check_required_together,
count_terms,
check_type_bool,
check_type_bits,
check_type_bytes,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_dict,
check_type_path,
check_type_raw,
check_type_str,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequence types
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(),
group=dict(),
seuser=dict(),
serole=dict(),
selevel=dict(),
setype=dict(),
attributes=dict(aliases=['attr']),
# The following are not about perms and should not be in a rewritten file_common_args
src=dict(), # Maybe dest or path would be appropriate but src is not
follow=dict(type='bool', default=False), # Maybe follow is appropriate because it determines whether to follow symlinks for permission purposes too
force=dict(type='bool'),
# not taken by the file module, but other action plugins call the file module so this ignores
# them for now. In the future, the caller should take care of removing these from the module
# arguments before calling the file module.
content=dict(no_log=True), # used by copy
backup=dict(), # Used by a few modules to create a remote backup before updating the file
remote_src=dict(), # used by assemble
regexp=dict(), # used by assemble
delimiter=dict(), # used by assemble
directory_mode=dict(), # used by copy
unsafe_writes=dict(type='bool'), # should be available to any module using atomic_move
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in no_log_strings are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, datetime.datetime):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, recursively remove them from every contained value as well."""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
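The masking behaviour of `remove_values` can be illustrated without the deferred-removal machinery. This standalone sketch reimplements only the scalar-string branch of `_remove_values_conditions` for illustration; it is not the Ansible implementation:

```python
def mask_values(value, no_log_strings):
    # Exact matches are replaced with the fixed placeholder; embedded
    # occurrences are blotted out with eight asterisks, as above.
    if isinstance(value, str):
        if value in no_log_strings:
            return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
        for secret in no_log_strings:
            value = value.replace(secret, '*' * 8)
    return value

print(mask_values('token=hunter2', {'hunter2'}))  # token=********
```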
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
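The index-walking loop above handles both `user:pass@host` and URL forms. For the URL case alone, the same effect can be sketched with a single regular expression; this is a simplified illustration, not the heuristic Ansible actually ships:

```python
import re

def sanitize_url_password(text):
    # Mask only the password portion of scheme://user:password@host,
    # leaving the username and host intact.
    return re.sub(r'(://[^:/@]+:)[^@]+(@)', r'\1********\2', text)

print(sanitize_url_password('fetched http://bob:s3cret@example.com/repo'))
```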
def _load_params():
''' Read the module's parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions); however,
we will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper is used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
def env_fallback(*args, **kwargs):
''' Load value from environment '''
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
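# Illustrative example (hypothetical parameter and variable names): wire
# env_fallback into an argument_spec so a parameter falls back to an
# environment variable when it is not supplied explicitly:
#
#   argument_spec = dict(
#       api_token=dict(type='str', no_log=True,
#                      fallback=(env_fallback, ['API_TOKEN'])),
#   )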
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read module documentation and install in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
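# Illustrative usage: a module depending on a third-party library typically
# guards the import and reports a uniform failure message:
#
#   try:
#       import requests
#       HAS_REQUESTS = True
#   except ImportError:
#       HAS_REQUESTS = False
#
#   if not HAS_REQUESTS:
#       module.fail_json(msg=missing_required_lib('requests'),
#                        exception=traceback.format_exc())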
class AnsibleFallbackNotFound(Exception):
pass
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
check_invalid_arguments=None, mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False, supports_check_mode=False,
required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
# Check whether code set this explicitly for deprecation purposes
if check_invalid_arguments is None:
check_invalid_arguments = True
module_set_check_invalid_arguments = False
else:
module_set_check_invalid_arguments = True
self.check_invalid_arguments = check_invalid_arguments
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._warnings = []
self._deprecations = []
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self._load_params()
self._set_fallbacks()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except (ValueError, TypeError) as e:
# Use exceptions here because it isn't safe to call fail_json until no_log is processed
print('\n{"failed": true, "msg": "Module alias error: %s"}' % to_native(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
self._handle_no_log_values()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._check_arguments(check_invalid_arguments)
# check exclusive early
if not bypass_checks:
self._check_mutually_exclusive(mutually_exclusive)
self._set_defaults(pre=True)
self._CHECK_ARGUMENT_TYPES_DISPATCHER = {
'str': self._check_type_str,
'list': self._check_type_list,
'dict': self._check_type_dict,
'bool': self._check_type_bool,
'int': self._check_type_int,
'float': self._check_type_float,
'path': self._check_type_path,
'raw': self._check_type_raw,
'jsonarg': self._check_type_jsonarg,
'json': self._check_type_jsonarg,
'bytes': self._check_type_bytes,
'bits': self._check_type_bits,
}
if not bypass_checks:
self._check_required_arguments()
self._check_argument_types()
self._check_argument_values()
self._check_required_together(required_together)
self._check_required_one_of(required_one_of)
self._check_required_if(required_if)
self._check_required_by(required_by)
self._set_defaults(pre=False)
# deal with options sub-spec
self._handle_options()
if not self.no_log:
self._log_invocation()
# finally, make sure we're in a sane working dir
self._set_cwd()
# Do this at the end so that logging parameters have been set up
# This is to warn third party module authors that the functionality is going away.
# We exclude uri and zfs as they have their own deprecation warnings for users and we'll
# make sure to update their code to stop using check_invalid_arguments when 2.9 rolls around
if module_set_check_invalid_arguments and self._name not in ('uri', 'zfs'):
self.deprecate('Setting check_invalid_arguments is deprecated and will be removed.'
' Update the code for this module. In the future, AnsibleModule will'
' always check for invalid arguments.', version='2.9')
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"falling back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700; this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
def warn(self, warning):
if isinstance(warning, string_types):
self._warnings.append(warning)
self.log('[WARNING] %s' % warning)
else:
raise TypeError("warn requires a string not a %s" % type(warning))
def deprecate(self, msg, version=None):
if isinstance(msg, string_types):
self._deprecations.append({
'msg': msg,
'version': version
})
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
else:
raise TypeError("deprecate requires a string not a %s" % type(msg))
def load_file_common_arguments(self, params):
'''
many modules deal with files; this encapsulates the common
options that the file module accepts so that they are directly
available to all modules and they can share code.
'''
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if not HAVE_SELINUX:
return False
if selinux.is_selinux_mls_enabled() == 1:
return True
else:
return False
def selinux_enabled(self):
if not HAVE_SELINUX:
seenabled = self.get_bin_path('selinuxenabled')
if seenabled is not None:
(rc, out, err) = self.run_command(seenabled)
if rc == 0:
self.fail_json(msg="Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!")
return False
if selinux.is_selinux_enabled() == 1:
return True
else:
return False
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
context = [None, None, None]
if self.selinux_mls_enabled():
context.append(None)
return context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
path_is_bytes = False
if isinstance(path, binary_type):
path_is_bytes = True
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
if path_is_bytes:
return b_path
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on an
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if path_mount_point == mount_point:
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
path_stat = os.lstat(b_path)
if self.check_file_absent_if_check_mode(b_path):
return True
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (errno.EPERM, errno.EROFS): # Can't set mode on symbolic links
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
attrcmd = [attrcmd, '-vd', path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
output['attr_flags'] = res[1].replace('-', '').strip()
output['version'] = res[0].strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) the permissions apply to are given by the first element in
# the 'permlist' list. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two lists of equal length: one contains the requested
# permissions and the other the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
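# Illustrative example: for a path whose current mode is 0o755,
# _symbolic_mode_to_octal(path_stat, 'u=rwx,g=rx,o=') yields 0o750:
# 'u=rwx' keeps rwx for the owner, 'g=rx' sets r-x for the group, and the
# empty 'o=' clause clears all permissions for others.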
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# http://docs.python.org/2/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if HAVE_SELINUX and self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _handle_aliases(self, spec=None, param=None, option_prefix=''):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
# this uses exceptions as it happens before we can safely call fail_json
alias_warnings = []
alias_results, self._legal_inputs = handle_aliases(spec, param, alias_warnings=alias_warnings)
for option, alias in alias_warnings:
self._warnings.append('Both option %s and its alias %s are set.' % (option_prefix + option, option_prefix + alias))
deprecated_aliases = []
for i in spec.keys():
if 'deprecated_aliases' in spec[i].keys():
for alias in spec[i]['deprecated_aliases']:
deprecated_aliases.append(alias)
for deprecation in deprecated_aliases:
if deprecation['name'] in param.keys():
self._deprecations.append(
{'msg': "Alias '%s' is deprecated. See the module docs for more information" % deprecation['name'],
'version': deprecation['version']})
return alias_results
def _handle_no_log_values(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
self.no_log_values.update(list_no_log_values(spec, param))
except TypeError as te:
self.fail_json(msg="Failure when processing no_log parameters. Module invocation will be hidden. "
"%s" % to_native(te), invocation={'module_args': 'HIDDEN DUE TO FAILURE'})
self._deprecations.extend(list_deprecations(spec, param))
def _check_arguments(self, check_invalid_arguments, spec=None, param=None, legal_inputs=None):
self._syslog_facility = 'LOG_USER'
unsupported_parameters = set()
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
if legal_inputs is None:
legal_inputs = self._legal_inputs
for k in list(param.keys()):
if check_invalid_arguments and k not in legal_inputs:
unsupported_parameters.add(k)
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in param:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(param[param_key]))
else:
setattr(self, PASS_VARS[k][0], param[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
if unsupported_parameters:
msg = "Unsupported parameters for (%s) module: %s" % (self._name, ', '.join(sorted(list(unsupported_parameters))))
if self._options_context:
msg += " found in %s." % " -> ".join(self._options_context)
msg += " Supported parameters include: %s" % (', '.join(sorted(spec.keys())))
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
def _count_terms(self, check, param=None):
if param is None:
param = self.params
return count_terms(check, param)
def _check_mutually_exclusive(self, spec, param=None):
if param is None:
param = self.params
try:
check_mutually_exclusive(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_one_of(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_one_of(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_together(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_together(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_by(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_by(spec, param)
except TypeError as e:
self.fail_json(msg=to_native(e))
def _check_required_arguments(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
check_required_arguments(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_if(self, spec, param=None):
''' ensure that parameters which are conditionally required are present '''
if spec is None:
return
if param is None:
param = self.params
try:
check_required_if(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_argument_values(self, spec=None, param=None):
''' ensure all arguments have the requested values, and there are no stray arguments '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
choices = v.get('choices', None)
if choices is None:
continue
if isinstance(choices, SEQUENCETYPE) and not isinstance(choices, (binary_type, text_type)):
if k in param:
# Allow one or more when type='list' param with choices
if isinstance(param[k], list):
diff_list = ", ".join([item for item in param[k] if item not in choices])
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (k, choices_str, diff_list)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
elif param[k] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if param[k] == 'False':
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(param[k],) = overlap
if param[k] == 'True':
if lowered_choices is None:
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(param[k],) = overlap
if param[k] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (k, choices_str, param[k])
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (k, choices)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _check_type_str(self, value):
opts = {
'error': False,
'warn': False,
'ignore': True
}
# Ignore, warn, or error when converting to a string.
allow_conversion = opts.get(self._string_conversion_action, True)
try:
return check_type_str(value, allow_conversion)
except TypeError:
common_msg = 'quote the entire value to ensure it does not change.'
if self._string_conversion_action == 'error':
msg = common_msg.capitalize()
raise TypeError(to_native(msg))
elif self._string_conversion_action == 'warn':
msg = ('The value {0!r} (type {0.__class__.__name__}) in a string field was converted to {1!r} (type string). '
'If this does not look like what you expect, {2}').format(value, to_text(value), common_msg)
self.warn(to_native(msg))
return to_native(value, errors='surrogate_or_strict')
def _check_type_list(self, value):
return check_type_list(value)
def _check_type_dict(self, value):
return check_type_dict(value)
def _check_type_bool(self, value):
return check_type_bool(value)
def _check_type_int(self, value):
return check_type_int(value)
def _check_type_float(self, value):
return check_type_float(value)
def _check_type_path(self, value):
return check_type_path(value)
def _check_type_jsonarg(self, value):
return check_type_jsonarg(value)
def _check_type_raw(self, value):
return check_type_raw(value)
def _check_type_bytes(self, value):
return check_type_bytes(value)
def _check_type_bits(self, value):
return check_type_bits(value)
def _handle_options(self, argument_spec=None, params=None, prefix=''):
''' deal with options to create sub spec '''
if argument_spec is None:
argument_spec = self.argument_spec
if params is None:
params = self.params
for (k, v) in argument_spec.items():
wanted = v.get('type', None)
if wanted == 'dict' or (wanted == 'list' and v.get('elements', '') == 'dict'):
spec = v.get('options', None)
if v.get('apply_defaults', False):
if spec is not None:
if params.get(k) is None:
params[k] = {}
else:
continue
elif spec is None or k not in params or params[k] is None:
continue
self._options_context.append(k)
if isinstance(params[k], dict):
elements = [params[k]]
else:
elements = params[k]
for idx, param in enumerate(elements):
if not isinstance(param, dict):
self.fail_json(msg="value of %s must be of type dict or list of dict" % k)
new_prefix = prefix + k
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
self._set_fallbacks(spec, param)
options_aliases = self._handle_aliases(spec, param, option_prefix=new_prefix)
options_legal_inputs = list(spec.keys()) + list(options_aliases.keys())
self._check_arguments(self.check_invalid_arguments, spec, param, options_legal_inputs)
# check exclusive early
if not self.bypass_checks:
self._check_mutually_exclusive(v.get('mutually_exclusive', None), param)
self._set_defaults(pre=True, spec=spec, param=param)
if not self.bypass_checks:
self._check_required_arguments(spec, param)
self._check_argument_types(spec, param)
self._check_argument_values(spec, param)
self._check_required_together(v.get('required_together', None), param)
self._check_required_one_of(v.get('required_one_of', None), param)
self._check_required_if(v.get('required_if', None), param)
self._check_required_by(v.get('required_by', None), param)
self._set_defaults(pre=False, spec=spec, param=param)
# handle multi level options (sub argspec)
self._handle_options(spec, param, new_prefix)
self._options_context.pop()
def _get_wanted_type(self, wanted, k):
if not callable(wanted):
if wanted is None:
# Mostly we want to default to str.
# For values set to None explicitly, return None instead as
# that allows a user to unset a parameter
wanted = 'str'
try:
type_checker = self._CHECK_ARGUMENT_TYPES_DISPATCHER[wanted]
except KeyError:
self.fail_json(msg="implementation error: unknown type %s requested for %s" % (wanted, k))
else:
# set the type_checker to the callable, and reset wanted to the callable's name (or type if it doesn't have one, ala MagicMock)
type_checker = wanted
wanted = getattr(wanted, '__name__', to_native(type(wanted)))
return type_checker, wanted
def _handle_elements(self, wanted, param, values):
type_checker, wanted_name = self._get_wanted_type(wanted, param)
validated_params = []
for value in values:
try:
validated_params.append(type_checker(value))
except (TypeError, ValueError) as e:
msg = "Elements value for option %s" % param
if self._options_context:
msg += " found in '%s'" % " -> ".join(self._options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_name, to_native(e))
self.fail_json(msg=msg)
return validated_params
def _check_argument_types(self, spec=None, param=None):
''' ensure all arguments have the requested type '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
wanted = v.get('type', None)
if k not in param:
continue
value = param[k]
if value is None:
continue
type_checker, wanted_name = self._get_wanted_type(wanted, k)
try:
param[k] = type_checker(value)
wanted_elements = v.get('elements', None)
if wanted_elements:
if wanted != 'list' or not isinstance(param[k], list):
msg = "Invalid type %s for option '%s'" % (wanted_name, k)
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += ", elements value check is supported only with 'list' type"
self.fail_json(msg=msg)
param[k] = self._handle_elements(wanted_elements, k, param[k])
except (TypeError, ValueError) as e:
msg = "argument %s is of type %s" % (k, type(value))
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
self.fail_json(msg=msg)
def _set_defaults(self, pre=True, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
default = v.get('default', None)
if pre is True:
# this prevents setting defaults on required items
if default is not None and k not in param:
param[k] = default
else:
# make sure things without a default still get set None
if k not in param:
param[k] = default
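The two-phase default handling above (pre=True before validation so required-checks still fire, pre=False afterwards to fill everything that remains) can be sketched as a standalone function; the names here are illustrative and not part of the module:

```python
def apply_defaults(spec, params, pre=True):
    """Two-phase default handling: before validation only non-None defaults
    are injected (so required-argument checks still fire for missing keys),
    afterwards every remaining key is filled, even with None."""
    for key, opts in spec.items():
        default = opts.get('default', None)
        if pre:
            if default is not None and key not in params:
                params[key] = default
        elif key not in params:
            params[key] = default
    return params

spec = {'state': {'default': 'present'}, 'name': {}, 'force': {'default': None}}
params = apply_defaults(spec, {'name': 'demo'}, pre=True)
params = apply_defaults(spec, params, pre=False)
```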
def _set_fallbacks(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
fallback = v.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if k not in param and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
param[k] = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append((arg.upper(), str(log_args[arg])))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
'''
bin_path = None
try:
bin_path = get_bin_path(arg, required, opt_dirs)
except ValueError as e:
self.fail_json(msg=to_text(e))
return bin_path
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
if self._warnings:
kwargs['warnings'] = self._warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version', None))
else:
self.deprecate(d)
else:
self.deprecate(kwargs['deprecations'])
if self._deprecations:
kwargs['deprecations'] = self._deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, **kwargs):
''' return from the module, with an error message '''
if 'msg' not in kwargs:
raise AssertionError("implementation error -- msg to explain the error is required")
kwargs['failed'] = True
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
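The chunked hashing loop in digest_from_file() can be reproduced with the standard library alone; this is a minimal standalone sketch (the function name and throwaway temp file are illustrative):

```python
import hashlib
import os
import tempfile

def digest_file(path, algorithm='sha256', blocksize=64 * 1024):
    """Return the hex digest of a file, reading in fixed-size blocks so
    large files are never loaded into memory all at once."""
    digest = hashlib.new(algorithm)
    with open(path, 'rb') as infile:
        block = infile.read(blocksize)
        while block:
            digest.update(block)
            block = infile.read(blocksize)
    return digest.hexdigest()

# Demonstration against a throwaway file.
fd, tmp = tempfile.mkstemp()
os.write(fd, b'hello world')
os.close(fd)
checksum = digest_file(tmp, 'sha256')
os.unlink(tmp)
```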
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
'''make a date-marked backup of the specified file, return True or False on success or failure'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
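The backup naming scheme used above (basename.PID.YYYY-MM-DD@HH:MM:SS~) can be illustrated in isolation; the pid/when parameters are added here only to make the sketch deterministic and are not part of the module's method:

```python
import os
import time

def backup_name(fn, pid=None, when=None):
    """Build a backup path in the basename.PID.timestamp~ style."""
    ext = time.strftime("%Y-%m-%d@%H:%M:%S~",
                        time.localtime(when if when is not None else time.time()))
    return '%s.%s.%s' % (fn, pid if pid is not None else os.getpid(), ext)

name = backup_name('/etc/motd', pid=1234, when=0)
```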
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
'''atomically move src to dest, copying attributes from dest; returns True on success.
It uses os.rename to make the replacement atomic; the rest of the function works around
limitations and corner cases, and preserves the selinux context when possible'''
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied),
# 16 (device or resource busy) and 26 (text file busy) which happens on vagrant synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp',
dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp(). Traceback
# would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating tmp file for atomic move. This usually happens when using Python3 less than Python3.5. '
'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
self.fail_json(msg='Unable to move %s to %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)),
exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
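The core idea of atomic_move() — stage content in a temp file inside the destination directory, then rename over the target in one step — can be sketched without the selinux, ownership, and unsafe-write handling. This is a simplified illustration, not the module's implementation:

```python
import os
import tempfile

def atomic_write(dest, data):
    """Write data to dest atomically: stage in a temp file in the same
    directory (so the rename cannot cross filesystems), then rename
    over the destination in a single step."""
    dest_dir = os.path.dirname(os.path.abspath(dest)) or '.'
    fd, tmp_name = tempfile.mkstemp(prefix='.ansible_tmp', dir=dest_dir)
    try:
        os.write(fd, data)
        os.close(fd)
        os.rename(tmp_name, dest)  # atomic on POSIX within one filesystem
    except Exception:
        os.unlink(tmp_name)
        raise

target = os.path.join(tempfile.gettempdir(), 'atomic_demo.txt')
atomic_write(target, b'first')
atomic_write(target, b'second')  # readers never observe a partial file
```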
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _read_from_pipes(self, rpipes, rfds, file_descriptor):
data = b('')
if file_descriptor in rfds:
data = os.read(file_descriptor.fileno(), self.get_buffer_size(file_descriptor))
if data == b(''):
rpipes.remove(file_descriptor)
return data
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
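The password-masking walk in _clean_args() can be demonstrated standalone; the regex below re-creates the module-level PASSWD_ARG_RE for illustration and may differ from the exact pattern:

```python
import re

# Illustrative re-creation of the module-level password-flag pattern.
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')

def mask_passwords(argv):
    """Replace values following password-style flags with asterisks,
    handling both '--password=secret' and '--password secret' forms."""
    cleaned = []
    is_passwd = False
    for arg in argv:
        if is_passwd:
            # previous arg was a bare password flag; mask this value
            is_passwd = False
            cleaned.append('********')
            continue
        if PASSWD_ARG_RE.match(arg):
            sep_idx = arg.find('=')
            if sep_idx > -1:
                cleaned.append('%s=********' % arg[:sep_idx])
                continue
            is_passwd = True
        cleaned.append(arg)
    return cleaned

masked = mask_passwords(['mysql', '--password=hunter2', '-u', 'root'])
```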
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of a non-zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
:kw before_communicate_callback: This function will be called
after the ``Popen`` object is created
but before communicating with the process.
(The ``Popen`` object is passed to the callback as its first argument)
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
old_env_vals['PATH'] = os.environ['PATH']
os.environ['PATH'] = "%s:%s" % (path_prefix, os.environ['PATH'])
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_signal_handlers,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd and os.path.isdir(cwd):
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not open %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b('')
stderr = b('')
rpipes = [cmd.stdout, cmd.stderr]
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
rfds, wfds, efds = select.select(rpipes, [], rpipes, 1)
stdout += self._read_from_pipes(rpipes, rfds, cmd.stdout)
stderr += self._read_from_pipes(rpipes, rfds, cmd.stderr)
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not rpipes or not rfds) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if rpipes is empty
elif not rpipes and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
self.fail_json(rc=e.errno, msg=to_native(e), cmd=self._clean_args(args))
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
self.fail_json(rc=257, msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
# 1032 == F_GETPIPE_SZ (Linux): ask the kernel for the pipe buffer size
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use sane default JIC
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
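The `select()`-driven read loop in `run_command` above can be exercised outside Ansible. The following is a minimal, self-contained sketch of the same pattern (`run_and_capture` is an invented name; it omits prompt detection, encodings, environment manipulation, and the other `run_command` features):

```python
import os
import select
import subprocess

def run_and_capture(args, data=None):
    """Minimal standalone sketch of the select()-based read loop used by
    run_command above (illustrative only; not part of AnsibleModule)."""
    proc = subprocess.Popen(
        args,
        stdin=subprocess.PIPE if data is not None else None,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    if data is not None:
        proc.stdin.write(data)
        proc.stdin.close()
    stdout, stderr = b'', b''
    rpipes = [proc.stdout, proc.stderr]
    while True:
        rfds, _, _ = select.select(rpipes, [], [], 1)
        for fd in rfds:
            # one os.read() per ready fd cannot block; b'' signals EOF
            chunk = os.read(fd.fileno(), 4096)
            if not chunk:
                rpipes.remove(fd)
            elif fd is proc.stdout:
                stdout += chunk
            else:
                stderr += chunk
        # stop once every pipe is drained and the process has exited
        if not rpipes and proc.poll() is not None:
            break
        if not rpipes:
            proc.wait()
            break
    proc.stdout.close()
    proc.stderr.close()
    return proc.returncode, stdout, stderr
```

Reading with `os.read()` after `select()` reports a pipe ready is what keeps the loop from blocking; a buffered `file.read(n)` could stall until `n` bytes arrive.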
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,471 |
lib/ansible/module_utils/basic.py contains deprecation which was supposed to be removed for 2.9
|
##### SUMMARY
See https://github.com/ansible/ansible/blob/88d8cf8197c53edd3bcdcd21429eb4c2bfbf0f6a/lib/ansible/module_utils/basic.py#L699-L706
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```paste below
2.9
devel
```
|
https://github.com/ansible/ansible/issues/65471
|
https://github.com/ansible/ansible/pull/65745
|
0b503f6057b5e60d84a3ee7fe11914eeacc05656
|
c58d8ed1f5f7f47f2a1d8069e04452353c052824
| 2019-12-03T18:53:41Z |
python
| 2020-01-21T21:58:26Z |
lib/ansible/module_utils/utm_utils.py
|
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright: (c) 2018, Johannes Brunswicker <[email protected]>
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
from ansible.module_utils._text import to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.urls import fetch_url
class UTMModuleConfigurationError(Exception):
def __init__(self, msg, **args):
super(UTMModuleConfigurationError, self).__init__(self, msg)
self.msg = msg
self.module_fail_args = args
def do_fail(self, module):
module.fail_json(msg=self.msg, other=self.module_fail_args)
class UTMModule(AnsibleModule):
"""
This is a helper class to construct any UTM Module. This will automatically add the utm host, port, token,
protocol, validate_certs and state field to the module. If you want to implement your own sophos utm module
just initialize this UTMModule class and define the Payload fields that are needed for your module.
See the other modules like utm_aaa_group for example.
"""
def __init__(self, argument_spec, bypass_checks=False, no_log=False, check_invalid_arguments=None,
mutually_exclusive=None, required_together=None, required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None):
default_specs = dict(
headers=dict(type='dict', required=False, default={}),
utm_host=dict(type='str', required=True),
utm_port=dict(type='int', default=4444),
utm_token=dict(type='str', required=True, no_log=True),
utm_protocol=dict(type='str', required=False, default="https", choices=["https", "http"]),
validate_certs=dict(type='bool', required=False, default=True),
state=dict(default='present', choices=['present', 'absent'])
)
super(UTMModule, self).__init__(self._merge_specs(default_specs, argument_spec), bypass_checks, no_log,
check_invalid_arguments, mutually_exclusive, required_together, required_one_of,
add_file_common_args, supports_check_mode, required_if)
def _merge_specs(self, default_specs, custom_specs):
result = default_specs.copy()
result.update(custom_specs)
return result
class UTM:
def __init__(self, module, endpoint, change_relevant_keys, info_only=False):
"""
Initialize UTM Class
:param module: The Ansible module
:param endpoint: The corresponding endpoint to the module
:param change_relevant_keys: The keys of the object to check for changes
:param info_only: When implementing an info module, set this to true. Will allow access to the info method only
"""
self.info_only = info_only
self.module = module
self.request_url = module.params.get('utm_protocol') + "://" + module.params.get('utm_host') + ":" + to_native(
module.params.get('utm_port')) + "/api/objects/" + endpoint + "/"
"""
The change_relevant_keys will be checked for changes to determine whether the object needs to be updated
"""
self.change_relevant_keys = change_relevant_keys
self.module.params['url_username'] = 'token'
self.module.params['url_password'] = module.params.get('utm_token')
        if not all(elem in module.params.keys() for elem in self.change_relevant_keys):
raise UTMModuleConfigurationError(
"The keys " + to_native(
self.change_relevant_keys) + " to check are not in the modules keys:\n" + to_native(
module.params.keys()))
def execute(self):
try:
if not self.info_only:
if self.module.params.get('state') == 'present':
self._add()
elif self.module.params.get('state') == 'absent':
self._remove()
else:
self._info()
except Exception as e:
self.module.fail_json(msg=to_native(e))
def _info(self):
"""
returns the info for an object in utm
"""
info, result = self._lookup_entry(self.module, self.request_url)
if info["status"] >= 400:
self.module.fail_json(result=json.loads(info))
else:
if result is None:
self.module.exit_json(changed=False)
else:
self.module.exit_json(result=result, changed=False)
def _add(self):
"""
adds or updates a host object on utm
"""
combined_headers = self._combine_headers()
is_changed = False
info, result = self._lookup_entry(self.module, self.request_url)
if info["status"] >= 400:
self.module.fail_json(result=json.loads(info))
else:
data_as_json_string = self.module.jsonify(self.module.params)
if result is None:
response, info = fetch_url(self.module, self.request_url, method="POST",
headers=combined_headers,
data=data_as_json_string)
if info["status"] >= 400:
self.module.fail_json(msg=json.loads(info["body"]))
is_changed = True
result = self._clean_result(json.loads(response.read()))
else:
if self._is_object_changed(self.change_relevant_keys, self.module, result):
response, info = fetch_url(self.module, self.request_url + result['_ref'], method="PUT",
headers=combined_headers,
data=data_as_json_string)
if info['status'] >= 400:
self.module.fail_json(msg=json.loads(info["body"]))
is_changed = True
result = self._clean_result(json.loads(response.read()))
self.module.exit_json(result=result, changed=is_changed)
def _combine_headers(self):
"""
This will combine a header default with headers that come from the module declaration
:return: A combined headers dict
"""
default_headers = {"Accept": "application/json", "Content-type": "application/json"}
if self.module.params.get('headers') is not None:
result = default_headers.copy()
result.update(self.module.params.get('headers'))
else:
result = default_headers
return result
def _remove(self):
"""
removes an object from utm
"""
is_changed = False
info, result = self._lookup_entry(self.module, self.request_url)
if result is not None:
response, info = fetch_url(self.module, self.request_url + result['_ref'], method="DELETE",
headers={"Accept": "application/json", "X-Restd-Err-Ack": "all"},
data=self.module.jsonify(self.module.params))
if info["status"] >= 400:
self.module.fail_json(msg=json.loads(info["body"]))
else:
is_changed = True
self.module.exit_json(changed=is_changed)
def _lookup_entry(self, module, request_url):
"""
Lookup for existing entry
:param module:
:param request_url:
:return:
"""
response, info = fetch_url(module, request_url, method="GET", headers={"Accept": "application/json"})
result = None
if response is not None:
results = json.loads(response.read())
result = next(iter(filter(lambda d: d['name'] == module.params.get('name'), results)), None)
return info, result
def _clean_result(self, result):
"""
Will clean the result from irrelevant fields
:param result: The result from the query
:return: The modified result
"""
del result['utm_host']
del result['utm_port']
del result['utm_token']
del result['utm_protocol']
del result['validate_certs']
del result['url_username']
del result['url_password']
del result['state']
return result
def _is_object_changed(self, keys, module, result):
"""
Check if my object is changed
:param keys: The keys that will determine if an object is changed
:param module: The module
:param result: The result from the query
:return:
"""
for key in keys:
if module.params.get(key) != result[key]:
return True
return False
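The change detection `_is_object_changed` performs over `change_relevant_keys` can be sketched standalone. This is an illustrative reduction (the function name and sample dicts are invented here; the real method compares `module.params` against the result returned by `_lookup_entry`):

```python
def is_object_changed(keys, desired, current):
    """Return True when any change-relevant key differs between the
    desired module parameters and the object stored on the UTM."""
    return any(desired.get(key) != current.get(key) for key in keys)

desired = {"name": "web", "address": "10.0.0.5", "comment": ""}
current = {"name": "web", "address": "10.0.0.4", "comment": ""}

print(is_object_changed(["name", "address"], desired, current))  # True: address differs
print(is_object_changed(["name", "comment"], desired, current))  # False: both match
```

Keys outside `change_relevant_keys` never trigger an update, which is why `_add` only issues a PUT when one of the listed keys differs.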
|
test/lib/ansible_test/_data/sanity/pylint/plugins/deprecated.py
|
# (c) 2018, Matt Martz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# -*- coding: utf-8 -*-
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from distutils.version import LooseVersion
import astroid
from pylint.interfaces import IAstroidChecker
from pylint.checkers import BaseChecker
from pylint.checkers.utils import check_messages
from ansible.release import __version__ as ansible_version_raw
MSGS = {
'E9501': ("Deprecated version (%r) found in call to Display.deprecated "
"or AnsibleModule.deprecate",
"ansible-deprecated-version",
"Used when a call to Display.deprecated specifies a version "
"less than or equal to the current version of Ansible",
{'minversion': (2, 6)}),
'E9502': ("Display.deprecated call without a version",
"ansible-deprecated-no-version",
"Used when a call to Display.deprecated does not specify a "
"version",
{'minversion': (2, 6)}),
'E9503': ("Invalid deprecated version (%r) found in call to "
"Display.deprecated or AnsibleModule.deprecate",
"ansible-invalid-deprecated-version",
"Used when a call to Display.deprecated specifies an invalid "
"version number",
{'minversion': (2, 6)}),
}
ANSIBLE_VERSION = LooseVersion('.'.join(ansible_version_raw.split('.')[:3]))
def _get_expr_name(node):
"""Funciton to get either ``attrname`` or ``name`` from ``node.func.expr``
Created specifically for the case of ``display.deprecated`` or ``self._display.deprecated``
"""
try:
return node.func.expr.attrname
except AttributeError:
# If this fails too, we'll let it raise, the caller should catch it
return node.func.expr.name
class AnsibleDeprecatedChecker(BaseChecker):
"""Checks for Display.deprecated calls to ensure that the ``version``
has not passed or met the time for removal
"""
__implements__ = (IAstroidChecker,)
name = 'deprecated'
msgs = MSGS
@check_messages(*(MSGS.keys()))
def visit_call(self, node):
version = None
try:
if (node.func.attrname == 'deprecated' and 'display' in _get_expr_name(node) or
node.func.attrname == 'deprecate' and 'module' in _get_expr_name(node)):
if node.keywords:
for keyword in node.keywords:
if len(node.keywords) == 1 and keyword.arg is None:
# This is likely a **kwargs splat
return
if keyword.arg == 'version':
if isinstance(keyword.value.value, astroid.Name):
# This is likely a variable
return
version = keyword.value.value
if not version:
try:
version = node.args[1].value
except IndexError:
self.add_message('ansible-deprecated-no-version', node=node)
return
try:
if ANSIBLE_VERSION >= LooseVersion(str(version)):
self.add_message('ansible-deprecated-version', node=node, args=(version,))
except ValueError:
self.add_message('ansible-invalid-deprecated-version', node=node, args=(version,))
except AttributeError:
# Not the type of node we are interested in
pass
def register(linter):
"""required method to auto register this checker """
linter.register_checker(AnsibleDeprecatedChecker(linter))
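The version gate the checker applies in `visit_call` can be reproduced standalone. The sketch below is illustrative: it substitutes a simple tuple comparison for the plugin's `LooseVersion` (which disappears along with `distutils` in newer Pythons), and hard-codes a stand-in for `ansible.release.__version__`:

```python
def parse_version(value):
    """Parse '2.9.0' into (2, 9, 0); raises ValueError on junk input."""
    return tuple(int(part) for part in str(value).split("."))

ANSIBLE_VERSION = parse_version("2.9.0")  # stand-in for ansible.release.__version__

def classify_deprecation(version):
    """Mirror the checker's decision table for a deprecation version."""
    if version is None:
        return "ansible-deprecated-no-version"
    try:
        if ANSIBLE_VERSION >= parse_version(version):
            return "ansible-deprecated-version"  # already due for removal
    except ValueError:
        return "ansible-invalid-deprecated-version"
    return None  # still in the future; no message emitted

print(classify_deprecation("2.8"))   # flagged: at or below the current version
print(classify_deprecation("2.12"))  # not flagged
```

Tuple comparison gives the same ordering as `LooseVersion` for plain dotted numeric versions, which is all the sanity test needs to decide whether a `deprecated()`/`deprecate()` call has outlived its removal version.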
|
test/sanity/ignore.txt
|
contrib/inventory/abiquo.py future-import-boilerplate
contrib/inventory/abiquo.py metaclass-boilerplate
contrib/inventory/apache-libcloud.py future-import-boilerplate
contrib/inventory/apache-libcloud.py metaclass-boilerplate
contrib/inventory/apstra_aos.py future-import-boilerplate
contrib/inventory/apstra_aos.py metaclass-boilerplate
contrib/inventory/azure_rm.py future-import-boilerplate
contrib/inventory/azure_rm.py metaclass-boilerplate
contrib/inventory/brook.py future-import-boilerplate
contrib/inventory/brook.py metaclass-boilerplate
contrib/inventory/cloudforms.py future-import-boilerplate
contrib/inventory/cloudforms.py metaclass-boilerplate
contrib/inventory/cobbler.py future-import-boilerplate
contrib/inventory/cobbler.py metaclass-boilerplate
contrib/inventory/collins.py future-import-boilerplate
contrib/inventory/collins.py metaclass-boilerplate
contrib/inventory/consul_io.py future-import-boilerplate
contrib/inventory/consul_io.py metaclass-boilerplate
contrib/inventory/digital_ocean.py future-import-boilerplate
contrib/inventory/digital_ocean.py metaclass-boilerplate
contrib/inventory/ec2.py future-import-boilerplate
contrib/inventory/ec2.py metaclass-boilerplate
contrib/inventory/fleet.py future-import-boilerplate
contrib/inventory/fleet.py metaclass-boilerplate
contrib/inventory/foreman.py future-import-boilerplate
contrib/inventory/foreman.py metaclass-boilerplate
contrib/inventory/freeipa.py future-import-boilerplate
contrib/inventory/freeipa.py metaclass-boilerplate
contrib/inventory/gce.py future-import-boilerplate
contrib/inventory/gce.py metaclass-boilerplate
contrib/inventory/gce.py pylint:blacklisted-name
contrib/inventory/infoblox.py future-import-boilerplate
contrib/inventory/infoblox.py metaclass-boilerplate
contrib/inventory/jail.py future-import-boilerplate
contrib/inventory/jail.py metaclass-boilerplate
contrib/inventory/landscape.py future-import-boilerplate
contrib/inventory/landscape.py metaclass-boilerplate
contrib/inventory/libvirt_lxc.py future-import-boilerplate
contrib/inventory/libvirt_lxc.py metaclass-boilerplate
contrib/inventory/linode.py future-import-boilerplate
contrib/inventory/linode.py metaclass-boilerplate
contrib/inventory/lxc_inventory.py future-import-boilerplate
contrib/inventory/lxc_inventory.py metaclass-boilerplate
contrib/inventory/lxd.py future-import-boilerplate
contrib/inventory/lxd.py metaclass-boilerplate
contrib/inventory/mdt_dynamic_inventory.py future-import-boilerplate
contrib/inventory/mdt_dynamic_inventory.py metaclass-boilerplate
contrib/inventory/nagios_livestatus.py future-import-boilerplate
contrib/inventory/nagios_livestatus.py metaclass-boilerplate
contrib/inventory/nagios_ndo.py future-import-boilerplate
contrib/inventory/nagios_ndo.py metaclass-boilerplate
contrib/inventory/nsot.py future-import-boilerplate
contrib/inventory/nsot.py metaclass-boilerplate
contrib/inventory/openshift.py future-import-boilerplate
contrib/inventory/openshift.py metaclass-boilerplate
contrib/inventory/openstack_inventory.py future-import-boilerplate
contrib/inventory/openstack_inventory.py metaclass-boilerplate
contrib/inventory/openvz.py future-import-boilerplate
contrib/inventory/openvz.py metaclass-boilerplate
contrib/inventory/ovirt.py future-import-boilerplate
contrib/inventory/ovirt.py metaclass-boilerplate
contrib/inventory/ovirt4.py future-import-boilerplate
contrib/inventory/ovirt4.py metaclass-boilerplate
contrib/inventory/packet_net.py future-import-boilerplate
contrib/inventory/packet_net.py metaclass-boilerplate
contrib/inventory/proxmox.py future-import-boilerplate
contrib/inventory/proxmox.py metaclass-boilerplate
contrib/inventory/rackhd.py future-import-boilerplate
contrib/inventory/rackhd.py metaclass-boilerplate
contrib/inventory/rax.py future-import-boilerplate
contrib/inventory/rax.py metaclass-boilerplate
contrib/inventory/rudder.py future-import-boilerplate
contrib/inventory/rudder.py metaclass-boilerplate
contrib/inventory/scaleway.py future-import-boilerplate
contrib/inventory/scaleway.py metaclass-boilerplate
contrib/inventory/serf.py future-import-boilerplate
contrib/inventory/serf.py metaclass-boilerplate
contrib/inventory/softlayer.py future-import-boilerplate
contrib/inventory/softlayer.py metaclass-boilerplate
contrib/inventory/spacewalk.py future-import-boilerplate
contrib/inventory/spacewalk.py metaclass-boilerplate
contrib/inventory/ssh_config.py future-import-boilerplate
contrib/inventory/ssh_config.py metaclass-boilerplate
contrib/inventory/stacki.py future-import-boilerplate
contrib/inventory/stacki.py metaclass-boilerplate
contrib/inventory/vagrant.py future-import-boilerplate
contrib/inventory/vagrant.py metaclass-boilerplate
contrib/inventory/vbox.py future-import-boilerplate
contrib/inventory/vbox.py metaclass-boilerplate
contrib/inventory/vmware.py future-import-boilerplate
contrib/inventory/vmware.py metaclass-boilerplate
contrib/inventory/vmware_inventory.py future-import-boilerplate
contrib/inventory/vmware_inventory.py metaclass-boilerplate
contrib/inventory/zabbix.py future-import-boilerplate
contrib/inventory/zabbix.py metaclass-boilerplate
contrib/inventory/zone.py future-import-boilerplate
contrib/inventory/zone.py metaclass-boilerplate
contrib/vault/azure_vault.py future-import-boilerplate
contrib/vault/azure_vault.py metaclass-boilerplate
contrib/vault/vault-keyring-client.py future-import-boilerplate
contrib/vault/vault-keyring-client.py metaclass-boilerplate
contrib/vault/vault-keyring.py future-import-boilerplate
contrib/vault/vault-keyring.py metaclass-boilerplate
docs/bin/find-plugin-refs.py future-import-boilerplate
docs/bin/find-plugin-refs.py metaclass-boilerplate
docs/docsite/_extensions/pygments_lexer.py future-import-boilerplate
docs/docsite/_extensions/pygments_lexer.py metaclass-boilerplate
docs/docsite/_themes/sphinx_rtd_theme/__init__.py future-import-boilerplate
docs/docsite/_themes/sphinx_rtd_theme/__init__.py metaclass-boilerplate
docs/docsite/rst/conf.py future-import-boilerplate
docs/docsite/rst/conf.py metaclass-boilerplate
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
examples/scripts/uptime.py future-import-boilerplate
examples/scripts/uptime.py metaclass-boilerplate
hacking/build-ansible.py shebang # only run by release engineers, Python 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.6!skip # docs build only, 2.7+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.6!skip # docs build only, 2.7+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.6!skip # docs build only, 2.7+ required
hacking/build_library/build_ansible/command_plugins/plugin_formatter.py compile-2.6!skip # docs build only, 2.7+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-3.5!skip # release process only, 3.6+ required
hacking/fix_test_syntax.py future-import-boilerplate
hacking/fix_test_syntax.py metaclass-boilerplate
hacking/get_library.py future-import-boilerplate
hacking/get_library.py metaclass-boilerplate
hacking/report.py future-import-boilerplate
hacking/report.py metaclass-boilerplate
hacking/return_skeleton_generator.py future-import-boilerplate
hacking/return_skeleton_generator.py metaclass-boilerplate
hacking/test-module.py future-import-boilerplate
hacking/test-module.py metaclass-boilerplate
hacking/tests/gen_distribution_version_testcase.py future-import-boilerplate
hacking/tests/gen_distribution_version_testcase.py metaclass-boilerplate
lib/ansible/cli/console.py pylint:blacklisted-name
lib/ansible/cli/scripts/ansible_cli_stub.py shebang
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/compat/selectors/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/compat/selectors/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/compat/selectors/_selectors2.py pylint:blacklisted-name
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/config/module_defaults.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:blacklisted-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name
lib/ansible/module_utils/_text.py future-import-boilerplate
lib/ansible/module_utils/_text.py metaclass-boilerplate
lib/ansible/module_utils/alicloud_ecs.py future-import-boilerplate
lib/ansible/module_utils/alicloud_ecs.py metaclass-boilerplate
lib/ansible/module_utils/ansible_tower.py future-import-boilerplate
lib/ansible/module_utils/ansible_tower.py metaclass-boilerplate
lib/ansible/module_utils/api.py future-import-boilerplate
lib/ansible/module_utils/api.py metaclass-boilerplate
lib/ansible/module_utils/azure_rm_common.py future-import-boilerplate
lib/ansible/module_utils/azure_rm_common.py metaclass-boilerplate
lib/ansible/module_utils/azure_rm_common_ext.py future-import-boilerplate
lib/ansible/module_utils/azure_rm_common_ext.py metaclass-boilerplate
lib/ansible/module_utils/azure_rm_common_rest.py future-import-boilerplate
lib/ansible/module_utils/azure_rm_common_rest.py metaclass-boilerplate
lib/ansible/module_utils/basic.py metaclass-boilerplate
lib/ansible/module_utils/cloud.py future-import-boilerplate
lib/ansible/module_utils/cloud.py metaclass-boilerplate
lib/ansible/module_utils/common/network.py future-import-boilerplate
lib/ansible/module_utils/common/network.py metaclass-boilerplate
lib/ansible/module_utils/compat/ipaddress.py future-import-boilerplate
lib/ansible/module_utils/compat/ipaddress.py metaclass-boilerplate
lib/ansible/module_utils/compat/ipaddress.py no-assert
lib/ansible/module_utils/compat/ipaddress.py no-unicode-literals
lib/ansible/module_utils/connection.py future-import-boilerplate
lib/ansible/module_utils/connection.py metaclass-boilerplate
lib/ansible/module_utils/database.py future-import-boilerplate
lib/ansible/module_utils/database.py metaclass-boilerplate
lib/ansible/module_utils/digital_ocean.py future-import-boilerplate
lib/ansible/module_utils/digital_ocean.py metaclass-boilerplate
lib/ansible/module_utils/dimensiondata.py future-import-boilerplate
lib/ansible/module_utils/dimensiondata.py metaclass-boilerplate
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/f5_utils.py future-import-boilerplate
lib/ansible/module_utils/f5_utils.py metaclass-boilerplate
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name
lib/ansible/module_utils/facts/sysctl.py future-import-boilerplate
lib/ansible/module_utils/facts/sysctl.py metaclass-boilerplate
lib/ansible/module_utils/facts/system/distribution.py pylint:ansible-bad-function
lib/ansible/module_utils/facts/utils.py future-import-boilerplate
lib/ansible/module_utils/facts/utils.py metaclass-boilerplate
lib/ansible/module_utils/firewalld.py future-import-boilerplate
lib/ansible/module_utils/firewalld.py metaclass-boilerplate
lib/ansible/module_utils/gcdns.py future-import-boilerplate
lib/ansible/module_utils/gcdns.py metaclass-boilerplate
lib/ansible/module_utils/gce.py future-import-boilerplate
lib/ansible/module_utils/gce.py metaclass-boilerplate
lib/ansible/module_utils/gcp.py future-import-boilerplate
lib/ansible/module_utils/gcp.py metaclass-boilerplate
lib/ansible/module_utils/gcp_utils.py future-import-boilerplate
lib/ansible/module_utils/gcp_utils.py metaclass-boilerplate
lib/ansible/module_utils/gitlab.py future-import-boilerplate
lib/ansible/module_utils/gitlab.py metaclass-boilerplate
lib/ansible/module_utils/hwc_utils.py future-import-boilerplate
lib/ansible/module_utils/hwc_utils.py metaclass-boilerplate
lib/ansible/module_utils/infinibox.py future-import-boilerplate
lib/ansible/module_utils/infinibox.py metaclass-boilerplate
lib/ansible/module_utils/ipa.py future-import-boilerplate
lib/ansible/module_utils/ipa.py metaclass-boilerplate
lib/ansible/module_utils/ismount.py future-import-boilerplate
lib/ansible/module_utils/ismount.py metaclass-boilerplate
lib/ansible/module_utils/json_utils.py future-import-boilerplate
lib/ansible/module_utils/json_utils.py metaclass-boilerplate
lib/ansible/module_utils/k8s/common.py metaclass-boilerplate
lib/ansible/module_utils/k8s/raw.py metaclass-boilerplate
lib/ansible/module_utils/k8s/scale.py metaclass-boilerplate
lib/ansible/module_utils/known_hosts.py future-import-boilerplate
lib/ansible/module_utils/known_hosts.py metaclass-boilerplate
lib/ansible/module_utils/kubevirt.py future-import-boilerplate
lib/ansible/module_utils/kubevirt.py metaclass-boilerplate
lib/ansible/module_utils/linode.py future-import-boilerplate
lib/ansible/module_utils/linode.py metaclass-boilerplate
lib/ansible/module_utils/lxd.py future-import-boilerplate
lib/ansible/module_utils/lxd.py metaclass-boilerplate
lib/ansible/module_utils/manageiq.py future-import-boilerplate
lib/ansible/module_utils/manageiq.py metaclass-boilerplate
lib/ansible/module_utils/memset.py future-import-boilerplate
lib/ansible/module_utils/memset.py metaclass-boilerplate
lib/ansible/module_utils/mysql.py future-import-boilerplate
lib/ansible/module_utils/mysql.py metaclass-boilerplate
lib/ansible/module_utils/net_tools/netbox/netbox_utils.py future-import-boilerplate
lib/ansible/module_utils/net_tools/nios/api.py future-import-boilerplate
lib/ansible/module_utils/net_tools/nios/api.py metaclass-boilerplate
lib/ansible/module_utils/netapp.py future-import-boilerplate
lib/ansible/module_utils/netapp.py metaclass-boilerplate
lib/ansible/module_utils/netapp_elementsw_module.py future-import-boilerplate
lib/ansible/module_utils/netapp_elementsw_module.py metaclass-boilerplate
lib/ansible/module_utils/netapp_module.py future-import-boilerplate
lib/ansible/module_utils/netapp_module.py metaclass-boilerplate
lib/ansible/module_utils/network/a10/a10.py future-import-boilerplate
lib/ansible/module_utils/network/a10/a10.py metaclass-boilerplate
lib/ansible/module_utils/network/aireos/aireos.py future-import-boilerplate
lib/ansible/module_utils/network/aireos/aireos.py metaclass-boilerplate
lib/ansible/module_utils/network/aos/aos.py future-import-boilerplate
lib/ansible/module_utils/network/aos/aos.py metaclass-boilerplate
lib/ansible/module_utils/network/aruba/aruba.py future-import-boilerplate
lib/ansible/module_utils/network/aruba/aruba.py metaclass-boilerplate
lib/ansible/module_utils/network/asa/asa.py future-import-boilerplate
lib/ansible/module_utils/network/asa/asa.py metaclass-boilerplate
lib/ansible/module_utils/network/avi/ansible_utils.py future-import-boilerplate
lib/ansible/module_utils/network/avi/ansible_utils.py metaclass-boilerplate
lib/ansible/module_utils/network/avi/avi.py future-import-boilerplate
lib/ansible/module_utils/network/avi/avi.py metaclass-boilerplate
lib/ansible/module_utils/network/avi/avi_api.py future-import-boilerplate
lib/ansible/module_utils/network/avi/avi_api.py metaclass-boilerplate
lib/ansible/module_utils/network/bigswitch/bigswitch.py future-import-boilerplate
lib/ansible/module_utils/network/bigswitch/bigswitch.py metaclass-boilerplate
lib/ansible/module_utils/network/checkpoint/checkpoint.py metaclass-boilerplate
lib/ansible/module_utils/network/cloudengine/ce.py future-import-boilerplate
lib/ansible/module_utils/network/cloudengine/ce.py metaclass-boilerplate
lib/ansible/module_utils/network/cnos/cnos.py future-import-boilerplate
lib/ansible/module_utils/network/cnos/cnos.py metaclass-boilerplate
lib/ansible/module_utils/network/cnos/cnos_devicerules.py future-import-boilerplate
lib/ansible/module_utils/network/cnos/cnos_devicerules.py metaclass-boilerplate
lib/ansible/module_utils/network/cnos/cnos_errorcodes.py future-import-boilerplate
lib/ansible/module_utils/network/cnos/cnos_errorcodes.py metaclass-boilerplate
lib/ansible/module_utils/network/common/cfg/base.py future-import-boilerplate
lib/ansible/module_utils/network/common/cfg/base.py metaclass-boilerplate
lib/ansible/module_utils/network/common/config.py future-import-boilerplate
lib/ansible/module_utils/network/common/config.py metaclass-boilerplate
lib/ansible/module_utils/network/common/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/common/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/common/netconf.py future-import-boilerplate
lib/ansible/module_utils/network/common/netconf.py metaclass-boilerplate
lib/ansible/module_utils/network/common/network.py future-import-boilerplate
lib/ansible/module_utils/network/common/network.py metaclass-boilerplate
lib/ansible/module_utils/network/common/parsing.py future-import-boilerplate
lib/ansible/module_utils/network/common/parsing.py metaclass-boilerplate
lib/ansible/module_utils/network/common/utils.py future-import-boilerplate
lib/ansible/module_utils/network/common/utils.py metaclass-boilerplate
lib/ansible/module_utils/network/dellos10/dellos10.py future-import-boilerplate
lib/ansible/module_utils/network/dellos10/dellos10.py metaclass-boilerplate
lib/ansible/module_utils/network/dellos6/dellos6.py future-import-boilerplate
lib/ansible/module_utils/network/dellos6/dellos6.py metaclass-boilerplate
lib/ansible/module_utils/network/dellos9/dellos9.py future-import-boilerplate
lib/ansible/module_utils/network/dellos9/dellos9.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeos/edgeos.py future-import-boilerplate
lib/ansible/module_utils/network/edgeos/edgeos.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch.py future-import-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py future-import-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py pylint:duplicate-string-formatting-argument
lib/ansible/module_utils/network/enos/enos.py future-import-boilerplate
lib/ansible/module_utils/network/enos/enos.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/eos.py future-import-boilerplate
lib/ansible/module_utils/network/eos/eos.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/exos/exos.py future-import-boilerplate
lib/ansible/module_utils/network/exos/exos.py metaclass-boilerplate
lib/ansible/module_utils/network/fortimanager/common.py future-import-boilerplate
lib/ansible/module_utils/network/fortimanager/common.py metaclass-boilerplate
lib/ansible/module_utils/network/fortimanager/fortimanager.py future-import-boilerplate
lib/ansible/module_utils/network/fortimanager/fortimanager.py metaclass-boilerplate
lib/ansible/module_utils/network/fortios/fortios.py future-import-boilerplate
lib/ansible/module_utils/network/fortios/fortios.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/frr.py future-import-boilerplate
lib/ansible/module_utils/network/frr/frr.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/base.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/base.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/common.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/common.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/configuration.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/configuration.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/device.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/device.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/fdm_swagger_client.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/fdm_swagger_client.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/operation.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/operation.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/ios.py future-import-boilerplate
lib/ansible/module_utils/network/ios/ios.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/base.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/base.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/iosxr.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/iosxr.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/argspec/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/junos/argspec/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/junos/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/facts/legacy/base.py future-import-boilerplate
lib/ansible/module_utils/network/junos/facts/legacy/base.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/junos.py future-import-boilerplate
lib/ansible/module_utils/network/junos/junos.py metaclass-boilerplate
lib/ansible/module_utils/network/meraki/meraki.py future-import-boilerplate
lib/ansible/module_utils/network/meraki/meraki.py metaclass-boilerplate
lib/ansible/module_utils/network/netconf/netconf.py future-import-boilerplate
lib/ansible/module_utils/network/netconf/netconf.py metaclass-boilerplate
lib/ansible/module_utils/network/netscaler/netscaler.py future-import-boilerplate
lib/ansible/module_utils/network/netscaler/netscaler.py metaclass-boilerplate
lib/ansible/module_utils/network/nos/nos.py future-import-boilerplate
lib/ansible/module_utils/network/nos/nos.py metaclass-boilerplate
lib/ansible/module_utils/network/nso/nso.py future-import-boilerplate
lib/ansible/module_utils/network/nso/nso.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/argspec/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/argspec/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/facts/legacy/base.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/facts/legacy/base.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/nxos.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/nxos.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/utils/utils.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/utils/utils.py metaclass-boilerplate
lib/ansible/module_utils/network/onyx/onyx.py future-import-boilerplate
lib/ansible/module_utils/network/onyx/onyx.py metaclass-boilerplate
lib/ansible/module_utils/network/ordnance/ordnance.py future-import-boilerplate
lib/ansible/module_utils/network/ordnance/ordnance.py metaclass-boilerplate
lib/ansible/module_utils/network/restconf/restconf.py future-import-boilerplate
lib/ansible/module_utils/network/restconf/restconf.py metaclass-boilerplate
lib/ansible/module_utils/network/routeros/routeros.py future-import-boilerplate
lib/ansible/module_utils/network/routeros/routeros.py metaclass-boilerplate
lib/ansible/module_utils/network/skydive/api.py future-import-boilerplate
lib/ansible/module_utils/network/skydive/api.py metaclass-boilerplate
lib/ansible/module_utils/network/slxos/slxos.py future-import-boilerplate
lib/ansible/module_utils/network/slxos/slxos.py metaclass-boilerplate
lib/ansible/module_utils/network/sros/sros.py future-import-boilerplate
lib/ansible/module_utils/network/sros/sros.py metaclass-boilerplate
lib/ansible/module_utils/network/voss/voss.py future-import-boilerplate
lib/ansible/module_utils/network/voss/voss.py metaclass-boilerplate
lib/ansible/module_utils/network/vyos/vyos.py future-import-boilerplate
lib/ansible/module_utils/network/vyos/vyos.py metaclass-boilerplate
lib/ansible/module_utils/oneandone.py future-import-boilerplate
lib/ansible/module_utils/oneandone.py metaclass-boilerplate
lib/ansible/module_utils/oneview.py metaclass-boilerplate
lib/ansible/module_utils/opennebula.py future-import-boilerplate
lib/ansible/module_utils/opennebula.py metaclass-boilerplate
lib/ansible/module_utils/openstack.py future-import-boilerplate
lib/ansible/module_utils/openstack.py metaclass-boilerplate
lib/ansible/module_utils/oracle/oci_utils.py future-import-boilerplate
lib/ansible/module_utils/oracle/oci_utils.py metaclass-boilerplate
lib/ansible/module_utils/ovirt.py future-import-boilerplate
lib/ansible/module_utils/ovirt.py metaclass-boilerplate
lib/ansible/module_utils/parsing/convert_bool.py future-import-boilerplate
lib/ansible/module_utils/parsing/convert_bool.py metaclass-boilerplate
lib/ansible/module_utils/postgres.py future-import-boilerplate
lib/ansible/module_utils/postgres.py metaclass-boilerplate
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pure.py future-import-boilerplate
lib/ansible/module_utils/pure.py metaclass-boilerplate
lib/ansible/module_utils/pycompat24.py future-import-boilerplate
lib/ansible/module_utils/pycompat24.py metaclass-boilerplate
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/rax.py future-import-boilerplate
lib/ansible/module_utils/rax.py metaclass-boilerplate
lib/ansible/module_utils/redhat.py future-import-boilerplate
lib/ansible/module_utils/redhat.py metaclass-boilerplate
lib/ansible/module_utils/remote_management/dellemc/dellemc_idrac.py future-import-boilerplate
lib/ansible/module_utils/remote_management/intersight.py future-import-boilerplate
lib/ansible/module_utils/remote_management/intersight.py metaclass-boilerplate
lib/ansible/module_utils/remote_management/lxca/common.py future-import-boilerplate
lib/ansible/module_utils/remote_management/lxca/common.py metaclass-boilerplate
lib/ansible/module_utils/remote_management/ucs.py future-import-boilerplate
lib/ansible/module_utils/remote_management/ucs.py metaclass-boilerplate
lib/ansible/module_utils/scaleway.py future-import-boilerplate
lib/ansible/module_utils/scaleway.py metaclass-boilerplate
lib/ansible/module_utils/service.py future-import-boilerplate
lib/ansible/module_utils/service.py metaclass-boilerplate
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/splitter.py future-import-boilerplate
lib/ansible/module_utils/splitter.py metaclass-boilerplate
lib/ansible/module_utils/storage/hpe3par/hpe3par.py future-import-boilerplate
lib/ansible/module_utils/storage/hpe3par/hpe3par.py metaclass-boilerplate
lib/ansible/module_utils/univention_umc.py future-import-boilerplate
lib/ansible/module_utils/univention_umc.py metaclass-boilerplate
lib/ansible/module_utils/urls.py future-import-boilerplate
lib/ansible/module_utils/urls.py metaclass-boilerplate
lib/ansible/module_utils/urls.py pylint:blacklisted-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/module_utils/vca.py future-import-boilerplate
lib/ansible/module_utils/vca.py metaclass-boilerplate
lib/ansible/module_utils/vexata.py future-import-boilerplate
lib/ansible/module_utils/vexata.py metaclass-boilerplate
lib/ansible/module_utils/yumdnf.py future-import-boilerplate
lib/ansible/module_utils/yumdnf.py metaclass-boilerplate
lib/ansible/modules/cloud/alicloud/ali_instance.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/alicloud/ali_instance_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/alicloud/ali_instance_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/alicloud/ali_instance_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/atomic/atomic_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/atomic/atomic_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_acs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_aks_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_aksversion_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_appgateway.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_appgateway.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_applicationsecuritygroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_applicationsecuritygroup_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_appserviceplan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_appserviceplan_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_autoscale.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_autoscale.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_autoscale_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_availabilityset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_availabilityset_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_batchaccount.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_cdnprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cdnprofile_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_containerinstance.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_containerinstance.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_containerinstance.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_containerinstance_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_containerregistry.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_containerregistry_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_deployment.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_deployment.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_deployment.py yamllint:unparsable-with-libyaml
lib/ansible/modules/cloud/azure/azure_rm_deployment_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_deployment_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlab.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlab_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabarmtemplate_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifact_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifactsource.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifactsource_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabcustomimage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabcustomimage_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_devtestlabcustomimage_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabenvironment.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabenvironment_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabpolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabpolicy_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabschedule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabschedule_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualnetwork.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_dnszone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnszone_info.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_dnszone_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnszone_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_functionapp.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_functionapp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_functionapp_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_galleryimage.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_galleryimage_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_image.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_image_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_image_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_iothubconsumergroup.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_keyvault.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_keyvault.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_keyvault_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_keyvaultkey.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_keyvaultkey_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_keyvaultsecret.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_lock_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_loganalyticsworkspace.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_loganalyticsworkspace_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_manageddisk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_manageddisk_info.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_manageddisk_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_mariadbconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mariadbconfiguration_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_mariadbdatabase.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mariadbfirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mariadbserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mysqlconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mysqlconfiguration_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_mysqldatabase.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mysqlfirewallrule.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_mysqlfirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mysqlserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_networkinterface_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_networkinterface_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_postgresqlconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_postgresqlconfiguration_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_postgresqldatabase.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_postgresqlfirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_postgresqlserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_publicipaddress.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_publicipaddress.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_publicipaddress_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_rediscache_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_rediscachefirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_resource.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_resource_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_resourcegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_resourcegroup_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roleassignment.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roleassignment_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:invalid-argument-spec
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roledefinition_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roledefinition_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_routetable.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_routetable_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_securitygroup_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebus.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebus_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_servicebus_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebusqueue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebussaspolicy.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_servicebussaspolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebustopic.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_servicebustopic.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebustopicsubscription.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_snapshot.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_sqldatabase.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqldatabase_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqlfirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqlfirewallrule_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqlserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqlserver_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_storageaccount.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/azure/azure_rm_storageaccount.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_storageaccount.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_storageaccount_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_storageaccount_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_storageblob.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_subnet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_subnet_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerendpoint.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerendpoint_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineextension.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineextension_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineimage_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetextension.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetextension.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetextension_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetinstance.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetinstance.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetinstance_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetwork.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetwork_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkpeering.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkpeering.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkpeering_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_webapp.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_webapp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_webapp_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_webappslot.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_webappslot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_aa_policy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_aa_policy.py yamllint:unparsable-with-libyaml
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py yamllint:unparsable-with-libyaml
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py yamllint:unparsable-with-libyaml
lib/ansible/modules/cloud/centurylink/clc_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_group.py yamllint:unparsable-with-libyaml
lib/ansible/modules/cloud/centurylink/clc_loadbalancer.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_loadbalancer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_modify_server.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_modify_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_publicip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_publicip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_server.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/cloudscale/cloudscale_floating_ip.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/cloudscale/cloudscale_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/cloudscale/cloudscale_server_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/cloudscale/cloudscale_volume.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/cloudstack/cs_loadbalancer_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/digital_ocean/digital_ocean_block_storage.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_block_storage.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/digital_ocean/digital_ocean_block_storage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_domain_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_droplet.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_droplet.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/digital_ocean/digital_ocean_droplet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_firewall_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/digital_ocean/digital_ocean_image_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_load_balancer_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_snapshot_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/digital_ocean/digital_ocean_tag.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_tag.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_tag_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_volume_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/dimensiondata/dimensiondata_network.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/dimensiondata/dimensiondata_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/dimensiondata/dimensiondata_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/docker/docker_container.py use-argspec-type-path # uses colon-separated paths, can't use type=path
lib/ansible/modules/cloud/google/_gcdns_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcdns_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gce.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/_gce.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gce.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gce.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/google/_gce.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gce.py yamllint:unparsable-with-libyaml
lib/ansible/modules/cloud/google/_gcp_backend_service.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/_gcp_healthcheck.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/_gcspanner.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcspanner.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_eip.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_img.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_img.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_img.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_instance_template.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_lb.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_mig.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_net.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_net.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_net.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_net.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/google/gce_net.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_pd.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_snapshot.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_tag.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_tag.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gcp_bigquery_table.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/hcloud/hcloud_network_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/heroku/heroku_collaborator.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/kubevirt/kubevirt_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/linode/linode.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/linode/linode.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/linode/linode.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/linode/linode_v4.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/lxc/lxc_container.py pylint:blacklisted-name
lib/ansible/modules/cloud/lxc/lxc_container.py use-argspec-type-path
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:use-run-command-not-popen
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/lxd/lxd_profile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/lxd/lxd_profile.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/lxd/lxd_profile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_dns_reload.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_memstore_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_server_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_zone_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_zone_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/cloud_init_data_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/helm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/helm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/ovirt.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/misc/ovirt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/ovirt.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/misc/proxmox.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/proxmox.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/rhevm.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/misc/terraform.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/misc/terraform.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/terraform.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/terraform.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/misc/virt.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/misc/virt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/virt.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/misc/virt_net.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/virt_net.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/misc/virt_pool.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/misc/virt_pool.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_firewall_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/oneandone/oneandone_firewall_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_monitoring_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/oneandone/oneandone_monitoring_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/online/_online_server_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/online/_online_server_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/online/_online_user_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/online/_online_user_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/online/online_server_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/online/online_server_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/online/online_user_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/online/online_user_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/opennebula/one_host.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/opennebula/one_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/opennebula/one_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/opennebula/one_image_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/opennebula/one_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/opennebula/one_vm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_auth.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_client_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_coe_cluster.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_coe_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_coe_cluster_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_coe_cluster_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_coe_cluster_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_flavor_info.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_flavor_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_flavor_info.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/openstack/os_flavor_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_floating_ip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_group_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_image_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_ironic_inspect.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_keypair.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_domain_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_domain_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_keystone_role.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_service.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_listener.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_listener.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_loadbalancer.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_loadbalancer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_member.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_networks_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_networks_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_nova_flavor.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_nova_flavor.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_nova_flavor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_nova_host_aggregate.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_nova_host_aggregate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_object.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_pool.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_port.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_port.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_port.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_port_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_port_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_project.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_project_access.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_project_access.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_project_access.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_project_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_project_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_project_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_recordset.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_recordset.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_recordset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_router.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_router.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_security_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_security_group_rule.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_security_group_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_server_action.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_server_action.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_server_action.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_server_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_server_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_server_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_server_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_server_metadata.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_server_metadata.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_server_volume.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_subnet.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_subnet.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_subnet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_subnets_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_subnets_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_user.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_user_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_user_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_user_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_user_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_user_role.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_volume.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_volume.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_volume_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_zone.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oracle/oci_vcn.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovh/ovh_ip_failover.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovh/ovh_ip_failover.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovh/ovh_ip_loadbalancing_backend.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovh/ovh_ip_loadbalancing_backend.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_auth.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_auth.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/ovirt/ovirt_cluster_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_cluster_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_cluster_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/ovirt/ovirt_datacenter_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_datacenter_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_datacenter_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_disk.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_disk.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/ovirt/ovirt_disk_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_disk_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_disk_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/ovirt/ovirt_group.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_group.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_group_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_group_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_group_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_host.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_host_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_job.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_job.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_job.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_job.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_network.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_network.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_network.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/ovirt/ovirt_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_network_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_network_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_network_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_nic.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_nic.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_nic.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_nic.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_nic_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_nic_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_nic_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_permission.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_permission.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_permission.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_permission_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_permission_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_permission_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_quota.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_quota.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_quota.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_quota.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_quota_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_quota_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_quota_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_role.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_role.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_role.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_snapshot_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_snapshot_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_snapshot_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_tag.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_tag.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_tag.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_tag.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_tag_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_tag_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_tag_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_template.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_template.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_template_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_template_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_template_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_user.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_user.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_user.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_user_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_user_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_user_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_vm.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vm.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_vm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_vmpool_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vmpool_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vmpool_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/packet/packet_device.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/packet/packet_device.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/packet/packet_ip_subnet.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/packet/packet_sshkey.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/packet/packet_sshkey.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/packet/packet_sshkey.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/packet/packet_volume_attachment.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/packet/packet_volume_attachment.py pylint:ansible-bad-function
lib/ansible/modules/cloud/podman/podman_image.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/podman/podman_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/podman/podman_image.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/podman/podman_image_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/pubnub/pubnub_blocks.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/pubnub/pubnub_blocks.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax.py use-argspec-type-path # fix needed
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/rackspace/rax_clb_ssl.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_clb_ssl.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_clb_ssl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_dns.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_dns.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_dns.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_dns_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_dns_record.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_dns_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_files_objects.py use-argspec-type-path
lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_keypair.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_keypair.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_keypair.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_meta.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_meta.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_meta.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_mon_alarm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_alarm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_mon_alarm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_mon_entity.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_entity.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_mon_entity.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_mon_notification.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_notification.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_mon_notification.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_queue.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_queue.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_queue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_scaling_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_scaling_policy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_scaling_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/_scaleway_image_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_image_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_image_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_ip_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_ip_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_ip_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_organization_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_organization_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_security_group_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_security_group_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_security_group_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_server_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_server_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_server_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_snapshot_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_snapshot_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_snapshot_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_volume_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_volume_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_volume_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_compute.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_compute.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_compute.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_image_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_image_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_image_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_ip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_ip.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_ip_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_ip_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_ip_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_lb.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_lb.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_lb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_organization_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_organization_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_security_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_security_group_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_security_group_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_security_group_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_security_group_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_security_group_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_server_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_server_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_server_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_snapshot_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_snapshot_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_snapshot_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_sshkey.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_sshkey.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_user_data.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_user_data.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_user_data.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_volume.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_volume.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_volume_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_volume_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_volume_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/smartos/imgadm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/smartos/imgadm.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/smartos/smartos_image_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/univention/udm_dns_record.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/univention/udm_dns_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/univention/udm_group.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/univention/udm_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/_vmware_drs_group_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_cfg_backup.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_content_library_manager.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_deploy_ovf.py use-argspec-type-path
lib/ansible/modules/cloud/vmware/vmware_deploy_ovf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_drs_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_drs_group_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_dvswitch.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvswitch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvswitch.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_guest.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_guest.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_guest.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attribute_defs.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_guest_snapshot.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_host_datastore.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_vm_host_drs_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_vspan_session.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_vspan_session.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_vspan_session.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vsphere_copy.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware_httpapi/vmware_appliance_access_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware_httpapi/vmware_appliance_health_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware_httpapi/vmware_cis_category_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware_httpapi/vmware_core_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vultr/_vultr_block_storage_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_dns_domain_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_firewall_group_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_network_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_os_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_region_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_server_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_ssh_key_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_startup_script_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_user_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_block_storage.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_block_storage.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vultr/vultr_block_storage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vultr/vultr_dns_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_dns_domain_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_dns_record.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_dns_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vultr/vultr_firewall_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_firewall_group_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_firewall_rule.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_firewall_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vultr/vultr_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_network_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_region_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_server_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_startup_script_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/webfaction/webfaction_app.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_db.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/webfaction/webfaction_mailbox.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_site.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_site.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/xenserver/xenserver_guest_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/xenserver/xenserver_guest_powerstate.py validate-modules:doc-required-mismatch
lib/ansible/modules/clustering/consul/consul.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/consul/consul.py validate-modules:undocumented-parameter
lib/ansible/modules/clustering/consul/consul_acl.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/consul/consul_acl.py validate-modules:doc-required-mismatch
lib/ansible/modules/clustering/consul/consul_kv.py validate-modules:doc-required-mismatch
lib/ansible/modules/clustering/consul/consul_kv.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/etcd3.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/clustering/etcd3.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:doc-required-mismatch
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:return-syntax-error
lib/ansible/modules/clustering/k8s/k8s_auth.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s_info.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:doc-required-mismatch
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:return-syntax-error
lib/ansible/modules/clustering/k8s/k8s_service.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s_service.py validate-modules:return-syntax-error
lib/ansible/modules/clustering/pacemaker_cluster.py validate-modules:doc-required-mismatch
lib/ansible/modules/clustering/pacemaker_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/znode.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/clustering/znode.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/znode.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/commands/command.py validate-modules:doc-missing-type
lib/ansible/modules/commands/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/commands/command.py validate-modules:undocumented-parameter
lib/ansible/modules/commands/expect.py validate-modules:doc-missing-type
lib/ansible/modules/crypto/acme/acme_account_info.py validate-modules:return-syntax-error
lib/ansible/modules/database/aerospike/aerospike_migrations.py yamllint:unparsable-with-libyaml
lib/ansible/modules/database/influxdb/influxdb_database.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_database.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/influxdb/influxdb_query.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_query.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/influxdb/influxdb_retention_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_retention_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/influxdb/influxdb_retention_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/influxdb/influxdb_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/influxdb/influxdb_write.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_write.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/misc/elasticsearch_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/database/misc/elasticsearch_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/misc/kibana_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/database/misc/kibana_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/misc/redis.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/misc/redis.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/misc/riak.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/misc/riak.py validate-modules:doc-missing-type
lib/ansible/modules/database/misc/riak.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mongodb/mongodb_parameter.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:doc-missing-type
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mongodb/mongodb_replicaset.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_shard.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_shard.py validate-modules:doc-missing-type
lib/ansible/modules/database/mongodb/mongodb_shard.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/mongodb/mongodb_shard.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mongodb/mongodb_user.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:doc-missing-type
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:undocumented-parameter
lib/ansible/modules/database/mssql/mssql_db.py validate-modules:doc-missing-type
lib/ansible/modules/database/mssql/mssql_db.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/mysql/mysql_db.py validate-modules:use-run-command-not-popen
lib/ansible/modules/database/mysql/mysql_user.py validate-modules:undocumented-parameter
lib/ansible/modules/database/mysql/mysql_variables.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/postgresql/postgresql_db.py use-argspec-type-path
lib/ansible/modules/database/postgresql/postgresql_db.py validate-modules:use-run-command-not-popen
lib/ansible/modules/database/postgresql/postgresql_idx.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/postgresql/postgresql_lang.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/postgresql/postgresql_membership.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/postgresql/postgresql_owner.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/postgresql/postgresql_pg_hba.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/postgresql/postgresql_privs.py validate-modules:parameter-documented-multiple-times
lib/ansible/modules/database/postgresql/postgresql_set.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/postgresql/postgresql_slot.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/postgresql/postgresql_subscription.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/postgresql/postgresql_tablespace.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/postgresql/postgresql_tablespace.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/postgresql/postgresql_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/database/proxysql/proxysql_backend_servers.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_backend_servers.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_backend_servers.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_global_variables.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_global_variables.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_global_variables.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_manage_config.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_manage_config.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_mysql_users.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_mysql_users.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_mysql_users.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_query_rules.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_query_rules.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_query_rules.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_replication_hostgroups.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_replication_hostgroups.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_replication_hostgroups.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_scheduler.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_scheduler.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_scheduler.py validate-modules:undocumented-parameter
lib/ansible/modules/database/vertica/vertica_configuration.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_configuration.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/vertica/vertica_info.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_role.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_role.py validate-modules:undocumented-parameter
lib/ansible/modules/database/vertica/vertica_schema.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_schema.py validate-modules:undocumented-parameter
lib/ansible/modules/database/vertica/vertica_user.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_user.py validate-modules:undocumented-parameter
lib/ansible/modules/files/acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/files/archive.py use-argspec-type-path # fix needed
lib/ansible/modules/files/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/files/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/copy.py pylint:blacklisted-name
lib/ansible/modules/files/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/files/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/file.py pylint:ansible-bad-function
lib/ansible/modules/files/file.py validate-modules:undocumented-parameter
lib/ansible/modules/files/find.py use-argspec-type-path # fix needed
lib/ansible/modules/files/find.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/files/iso_extract.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/files/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/patch.py pylint:blacklisted-name
lib/ansible/modules/files/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/stat.py validate-modules:parameter-invalid
lib/ansible/modules/files/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/files/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/files/synchronize.py pylint:blacklisted-name
lib/ansible/modules/files/synchronize.py use-argspec-type-path
lib/ansible/modules/files/synchronize.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/synchronize.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/synchronize.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/files/synchronize.py validate-modules:undocumented-parameter
lib/ansible/modules/files/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/xml.py validate-modules:doc-required-mismatch
lib/ansible/modules/identity/cyberark/cyberark_authentication.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:doc-missing-type
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:doc-missing-type
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/opendj/opendj_backendprop.py validate-modules:doc-missing-type
lib/ansible/modules/identity/opendj/opendj_backendprop.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_binding.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_binding.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_exchange.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_exchange.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_exchange.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_global_parameter.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_global_parameter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_parameter.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_policy.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_queue.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_queue.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_queue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_user.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_vhost.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_vhost_limits.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/airbrake_deployment.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/airbrake_deployment.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/bigpanda.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/bigpanda.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/bigpanda.py validate-modules:undocumented-parameter
lib/ansible/modules/monitoring/circonus_annotation.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/monitoring/circonus_annotation.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/circonus_annotation.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/datadog/datadog_monitor.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/datadog/datadog_monitor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/grafana/grafana_dashboard.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/grafana/grafana_dashboard.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/grafana/grafana_dashboard.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/grafana/grafana_datasource.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/grafana/grafana_datasource.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/grafana/grafana_datasource.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/grafana/grafana_datasource.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/grafana/grafana_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/grafana/grafana_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/honeybadger_deployment.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/icinga2_feature.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:undocumented-parameter
lib/ansible/modules/monitoring/librato_annotation.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/librato_annotation.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/librato_annotation.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/logentries.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/monitoring/logentries.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/logentries.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/logentries.py validate-modules:undocumented-parameter
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/logicmonitor.py yamllint:unparsable-with-libyaml
lib/ansible/modules/monitoring/logicmonitor_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/logicmonitor_facts.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/logicmonitor_facts.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/monitoring/logstash_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/logstash_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/monit.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/monit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/nagios.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/nagios.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/nagios.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/nagios.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/monitoring/newrelic_deployment.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/pagerduty.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/pagerduty.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/pagerduty.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/pagerduty_alert.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/pagerduty_alert.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/pingdom.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/monitoring/pingdom.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/rollbar_deployment.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/sensu/sensu_check.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/sensu/sensu_check.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/sensu/sensu_check.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/sensu/sensu_client.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/sensu/sensu_client.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/sensu/sensu_client.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/sensu/sensu_handler.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/monitoring/sensu/sensu_handler.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/sensu/sensu_handler.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/sensu/sensu_silence.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/sensu/sensu_silence.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/sensu/sensu_silence.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/sensu/sensu_subscription.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/spectrum_device.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/spectrum_device.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/spectrum_device.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/stackdriver.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/stackdriver.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/statusio_maintenance.py pylint:blacklisted-name
lib/ansible/modules/monitoring/statusio_maintenance.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/statusio_maintenance.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/uptimerobot.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:missing-suboption-docs
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/basics/get_url.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/basics/uri.py pylint:blacklisted-name
lib/ansible/modules/net_tools/basics/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/basics/uri.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/cloudflare_dns.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/dnsmadeeasy.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/dnsmadeeasy.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/dnsmadeeasy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/ip_netns.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/ipinfoio_facts.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/ipinfoio_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/ldap/ldap_entry.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/ldap/ldap_entry.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/ldap/ldap_passwd.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/ldap/ldap_passwd.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/netbox/netbox_device.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/netbox/netbox_device.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/netbox/netbox_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/netbox/netbox_ip_address.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/netbox/netbox_ip_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/netbox/netbox_prefix.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/netbox/netbox_prefix.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/netbox/netbox_site.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/netcup_dns.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/netcup_dns.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:parameter-alias-self
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:parameter-alias-self
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:missing-suboption-docs
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:parameter-alias-self
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nmcli.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nsupdate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/a10/a10_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/a10/a10_server_axapi3.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/a10/a10_server_axapi3.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/a10/a10_service_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/a10/a10_virtual_server.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/a10/a10_virtual_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/a10/a10_virtual_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/aci/aci_aaa_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_aaa_user_certificate.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_access_port_block_to_access_port.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_access_port_to_interface_policy_leaf_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_access_sub_port_block_to_access_port.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_aep.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_aep_to_domain.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_ap.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_bd.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_bd_subnet.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_bd_to_l3out.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_config_rollback.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_config_snapshot.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_contract.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_contract_subject.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_contract_subject_to_filter.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_domain.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_domain_to_encap_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_domain_to_vlan_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_encap_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_encap_pool_range.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_epg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_epg_monitoring_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_epg_to_contract.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_epg_to_domain.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_fabric_node.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_fabric_scheduler.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_fabric_scheduler.py validate-modules:parameter-alias-self
lib/ansible/modules/network/aci/aci_filter.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_filter_entry.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_firmware_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_firmware_group.py validate-modules:parameter-alias-self
lib/ansible/modules/network/aci/aci_firmware_group_node.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_firmware_group_node.py validate-modules:parameter-alias-self
lib/ansible/modules/network/aci/aci_firmware_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_firmware_policy.py validate-modules:parameter-alias-self
lib/ansible/modules/network/aci/aci_firmware_source.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_cdp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_fc.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_l2.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_leaf_policy_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_leaf_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_lldp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_mcp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_ospf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_port_channel.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_port_security.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_selector_to_switch_policy_leaf_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_l3out.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_l3out_extepg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_l3out_extsubnet.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_l3out_route_tag_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_maintenance_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_maintenance_group_node.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_maintenance_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_rest.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_static_binding_to_epg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_switch_leaf_selector.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_switch_policy_leaf_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_switch_policy_vpc_protection_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_taboo_contract.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_tenant.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_tenant_action_rule_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_tenant_ep_retention_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_tenant_span_dst_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_tenant_span_src_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_tenant_span_src_group_to_dst_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_vlan_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_vlan_pool_encap_block.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_vmm_credential.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_label.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_role.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_anp_epg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_anp_epg_domain.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_anp_epg_domain.py pylint:ansible-bad-function
lib/ansible/modules/network/aci/mso_schema_site_anp_epg_staticleaf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_anp_epg_staticport.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_anp_epg_subnet.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_bd_l3out.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_vrf_region.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_vrf_region_cidr.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_vrf_region_cidr_subnet.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_template_deploy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_site.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_tenant.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aireos/aireos_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/aireos/aireos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/aireos/aireos_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aireos/aireos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/aireos/aireos_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/aireos/aireos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/aireos/aireos_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aireos/aireos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/aos/_aos_asn_pool.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_asn_pool.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_blueprint.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_blueprint.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_blueprint_param.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_blueprint_param.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_blueprint_virtnet.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_blueprint_virtnet.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_device.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_device.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_external_router.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_external_router.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_ip_pool.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_ip_pool.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_logical_device.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_logical_device.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_logical_device_map.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_logical_device_map.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_login.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_login.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_rack_type.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_rack_type.py metaclass-boilerplate
lib/ansible/modules/network/aos/_aos_template.py future-import-boilerplate
lib/ansible/modules/network/aos/_aos_template.py metaclass-boilerplate
lib/ansible/modules/network/aruba/aruba_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/aruba/aruba_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/aruba/aruba_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aruba/aruba_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/aruba/aruba_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/aruba/aruba_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/aruba/aruba_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aruba/aruba_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/asa/asa_acl.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/asa/asa_acl.py validate-modules:doc-missing-type
lib/ansible/modules/network/asa/asa_acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/asa/asa_acl.py validate-modules:undocumented-parameter
lib/ansible/modules/network/asa/asa_acl.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/asa/asa_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/asa/asa_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/asa/asa_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/asa/asa_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/asa/asa_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/asa/asa_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/asa/asa_config.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/network/asa/asa_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/asa/asa_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/asa/asa_config.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/asa/asa_og.py validate-modules:doc-missing-type
lib/ansible/modules/network/asa/asa_og.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_actiongroupconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_actiongroupconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_actiongroupconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_actiongroupconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_alertconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_alertconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_alertconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_alertconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_alertemailconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_alertemailconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_alertemailconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_alertemailconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_alertscriptconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_alertscriptconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_alertscriptconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_alertscriptconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_analyticsprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_analyticsprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_analyticsprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_analyticsprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_api_session.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_api_session.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_api_session.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_api_session.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/avi/avi_api_session.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_api_version.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_api_version.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_api_version.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_api_version.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_applicationprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_applicationprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_applicationprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_applicationprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_authprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_authprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_authprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_authprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_backup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_backup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_backup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_backup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_backupconfiguration.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_backupconfiguration.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_backupconfiguration.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_backupconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_cloud.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_cloud.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_cloud.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_cloud.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_cloudconnectoruser.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_cloudconnectoruser.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_cloudconnectoruser.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_cloudconnectoruser.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_cloudproperties.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_cloudproperties.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_cloudproperties.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_cloudproperties.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_cluster.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_cluster.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_cluster.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_clusterclouddetails.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_clusterclouddetails.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_clusterclouddetails.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_clusterclouddetails.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_controllerproperties.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_controllerproperties.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_controllerproperties.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_controllerproperties.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_dnspolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_dnspolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_dnspolicy.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_dnspolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_errorpagebody.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_errorpagebody.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_errorpagebody.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_errorpagebody.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_errorpageprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_errorpageprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_errorpageprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_errorpageprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_gslb.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslb.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslb.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_gslb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_gslbservice.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_gslbservice.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_healthmonitor.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_healthmonitor.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_healthmonitor.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_healthmonitor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_httppolicyset.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_httppolicyset.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_httppolicyset.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_httppolicyset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_ipaddrgroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_ipaddrgroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_ipaddrgroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_ipaddrgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_l4policyset.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_l4policyset.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_l4policyset.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_l4policyset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_microservicegroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_microservicegroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_microservicegroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_microservicegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_network.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_network.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_network.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_networkprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_networkprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_networkprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_networkprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_pkiprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_pkiprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_pkiprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_pkiprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_pool.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_pool.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_pool.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_poolgroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_poolgroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_poolgroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_poolgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_prioritylabels.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_prioritylabels.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_prioritylabels.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_prioritylabels.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_role.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_role.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_role.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_scheduler.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_scheduler.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_scheduler.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_scheduler.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_seproperties.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_seproperties.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_seproperties.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_seproperties.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_serviceengine.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_serviceengine.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_serviceengine.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_serviceengine.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_serviceenginegroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_serviceenginegroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_serviceenginegroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_serviceenginegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_snmptrapprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_snmptrapprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_snmptrapprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_snmptrapprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_sslprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_sslprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_sslprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_sslprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_stringgroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_stringgroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_stringgroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_stringgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_systemconfiguration.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_systemconfiguration.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_systemconfiguration.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_systemconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_tenant.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_tenant.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_tenant.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_tenant.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_useraccount.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_useraccount.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_useraccount.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_useraccount.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/avi/avi_useraccount.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_useraccountprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_useraccountprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_useraccountprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_useraccountprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_virtualservice.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_virtualservice.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_virtualservice.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_virtualservice.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_vrfcontext.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_vrfcontext.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_vrfcontext.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_vrfcontext.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_vsdatascriptset.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_vsdatascriptset.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_vsdatascriptset.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_vsdatascriptset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_vsvip.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_vsvip.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_vsvip.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_vsvip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_webhook.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_webhook.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_webhook.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_webhook.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/bigswitch/bcf_switch.py validate-modules:doc-missing-type
lib/ansible/modules/network/bigswitch/bcf_switch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/bigswitch/bigmon_chain.py validate-modules:doc-missing-type
lib/ansible/modules/network/bigswitch/bigmon_chain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:doc-missing-type
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/check_point/checkpoint_access_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/check_point/checkpoint_object_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/check_point/checkpoint_session.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/check_point/checkpoint_task_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cli/cli_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cli/cli_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/cli/cli_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_aaa_server.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_acl.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_acl_advance.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_advance.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_acl_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bfd_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bfd_session.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_session.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bfd_view.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_view.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bgp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bgp_af.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_af.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_command.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_command.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_command.py pylint:blacklisted-name
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_config.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_config.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_dldp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py pylint:blacklisted-name
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_evpn_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_facts.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_facts.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_file_copy.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_file_copy.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_info_center_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_info_center_log.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_log.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_ip_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ip_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_is_is_view.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudengine/ce_link_status.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_link_status.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_mlag_config.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_config.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py pylint:blacklisted-name
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_mtu.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_mtu.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_netconf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netconf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_netstream_export.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_export.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_netstream_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_netstream_template.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_template.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_ntp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ntp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_ospf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ospf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_reboot.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_reboot.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_rollback.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_rollback.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_sflow.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_sflow.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_snmp_community.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_community.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_snmp_location.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_location.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_snmp_user.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_user.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_startup.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_startup.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_static_route.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_static_route.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_static_route_bfd.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudengine/ce_stp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_stp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_switchport.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_switchport.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vlan.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vlan.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vrf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vrf_af.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf_af.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vrrp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrrp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudvision/cv_server_provision.py pylint:blacklisted-name
lib/ansible/modules/network/cloudvision/cv_server_provision.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudvision/cv_server_provision.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_backup.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_bgp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_bgp.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_conditional_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_conditional_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_conditional_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_conditional_command.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_conditional_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_conditional_template.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_conditional_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_conditional_template.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_config.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_factory.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_factory.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_factory.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cnos/cnos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_facts.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_image.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_image.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_image.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_image.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_reload.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_reload.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_reload.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_rollback.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_save.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_save.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_save.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_showrun.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_showrun.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cnos/cnos_showrun.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_template.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_template.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_vlag.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_vlag.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_vlag.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_vlag.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cumulus/nclu.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/edgeos/edgeos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/edgeos/edgeos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeos/edgeos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/edgeos/edgeos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeos/edgeos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeswitch/edgeswitch_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/enos/enos_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/enos/enos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/enos/enos_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/enos/enos_command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/enos/enos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/enos/enos_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/enos/enos_command.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/enos/enos_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/enos/enos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/enos/enos_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/enos/enos_config.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/enos/enos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/enos/enos_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/enos/enos_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/enos/enos_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/enos/enos_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/enos/enos_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/enos/enos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/enos/enos_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/enos/enos_facts.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_vlan.py future-import-boilerplate
lib/ansible/modules/network/eos/_eos_vlan.py metaclass-boilerplate
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/eos_banner.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_banner.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_command.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_command.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_config.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_config.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_eapi.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_eapi.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_lldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_lldp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_logging.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_logging.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_logging.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/eos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/eos_system.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_system.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_user.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_user.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/eos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/eos_vrf.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_vrf.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/exos/exos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/exos/exos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/exos/exos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/exos/exos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/_bigip_asm_policy.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/_bigip_asm_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/_bigip_asm_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/_bigip_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/_bigip_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/bigip_apm_acl.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_apm_network_access.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_apm_policy_fetch.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_apm_policy_import.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_appsvcs_extension.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_asm_dos_application.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_asm_policy_fetch.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_asm_policy_import.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_asm_policy_manage.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_asm_policy_server_technology.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_asm_policy_signature_set.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_cli_alias.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_cli_script.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_configsync_action.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_data_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_auth.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_auth_ldap.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_certificate.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_connectivity.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_dns.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_group_member.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_ha_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_httpd.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_info.py validate-modules:return-syntax-error
lib/ansible/modules/network/f5/bigip_device_license.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_ntp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_sshd.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_syslog.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_traffic_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_trust.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_dns_cache_resolver.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_dns_nameserver.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_dns_resolver.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_dns_zone.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_file_copy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_address_list.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/f5/bigip_firewall_address_list.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_dos_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_dos_vector.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_global_rules.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_log_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_log_profile_network.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_log_profile_network.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/network/f5/bigip_firewall_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_port_list.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_rule_list.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_schedule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_datacenter.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_global.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_bigip.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_external.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_firepass.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_http.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_https.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_tcp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_tcp_half_open.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:undocumented-parameter
lib/ansible/modules/network/f5/bigip_gtm_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_topology_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_topology_region.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_virtual_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_wide_ip.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_hostname.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_iapp_service.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_iapp_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ike_peer.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_imish_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ipsec_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_irule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_log_destination.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_log_publisher.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_lx_package.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_management_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_message_routing_peer.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_message_routing_protocol.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_message_routing_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_message_routing_router.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_message_routing_transport_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_dns.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_external.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_gateway_icmp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_http.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_https.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_ldap.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_snmp_dca.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_tcp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_tcp_echo.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_tcp_half_open.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_udp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_node.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_partition.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_password_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_policy_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:undocumented-parameter
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:undocumented-parameter
lib/ansible/modules/network/f5/bigip_profile_analytics.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_client_ssl.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_dns.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_fastl4.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_http.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_http2.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_http_compression.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_oneconnect.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_persistence_cookie.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_persistence_src_addr.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_server_ssl.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_tcp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_udp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_provision.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_qkview.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_remote_role.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_remote_syslog.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_remote_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_routedomain.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_selfip.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_service_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_smtp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_snat_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_snat_translation.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_snmp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_snmp_community.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_snmp_trap.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_software_image.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_software_install.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_software_update.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ssl_certificate.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ssl_key.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ssl_ocsp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_sys_daemon_log_tmm.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_sys_db.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_sys_global.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_timer_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_traffic_selector.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_trunk.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_tunnel.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ucs.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ucs_fetch.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_vcmp_guest.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_virtual_address.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_virtual_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_wait.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_fasthttp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_fastl4_tcp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_fastl4_udp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_http.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_https_offload.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_https_waf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_device_discovery.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_device_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_device_info.py validate-modules:return-syntax-error
lib/ansible/modules/network/f5/bigiq_regkey_license.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_regkey_license_assignment.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_regkey_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_utility_license.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_utility_license_assignment.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortianalyzer/faz_device.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_device.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_device.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_device_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_device_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_device_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_device_provision_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_device_provision_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_ippool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_ippool6.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_vip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwpol_ipv4.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwpol_package.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_fwpol_package.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_ha.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_provisioning.py validate-modules:doc-missing-type
lib/ansible/modules/network/fortimanager/fmgr_provisioning.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_provisioning.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_query.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_script.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/fortimanager/fmgr_script.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_script.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_appctrl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_av.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_dns.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_ips.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_profile_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_proxy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_spam.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_ssl_ssh.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_voip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_waf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_wanopt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_web.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortios/fortios_address.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/fortios/fortios_address.py validate-modules:doc-missing-type
lib/ansible/modules/network/fortios/fortios_antivirus_quarantine.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortios/fortios_firewall_DoS_policy.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_firewall_DoS_policy6.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_firewall_policy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_firewall_sniffer.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_ipv4_policy.py validate-modules:doc-missing-type
lib/ansible/modules/network/fortios/fortios_ipv4_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortios/fortios_report_chart.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_switch_controller_lldp_profile.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_switch_controller_managed_switch.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_system_dhcp_server.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_system_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_voip_profile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_manualkey.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_manualkey_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:doc-choices-incompatible-type
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortios/fortios_wireless_controller_setting.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp_profile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/frr/frr_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/icx/icx_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/icx/icx_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/icx/icx_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/icx/icx_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/icx/icx_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/icx/icx_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/illumos/dladm_etherstub.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_etherstub.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/dladm_iptun.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_iptun.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/dladm_iptun.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/dladm_linkprop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_linkprop.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/dladm_linkprop.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/network/illumos/dladm_linkprop.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/dladm_vlan.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/dladm_vnic.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_vnic.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/illumos/dladm_vnic.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/flowadm.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/flowadm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/illumos/flowadm.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_addr.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_addr.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_addr.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/ipadm_addrprop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_addrprop.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_addrprop.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/network/illumos/ipadm_if.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_if.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_ifprop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_ifprop.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/illumos/ipadm_ifprop.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_ifprop.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/network/illumos/ipadm_prop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_prop.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/illumos/ipadm_prop.py validate-modules:doc-missing-type
lib/ansible/modules/network/ingate/ig_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/ingate/ig_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ingate/ig_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ingate/ig_config.py validate-modules:return-syntax-error
lib/ansible/modules/network/ingate/ig_unit_information.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ingate/ig_unit_information.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_banner.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_banner.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_command.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_command.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_config.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_config.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_facts.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_facts.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_lag_interfaces.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_logging.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_logging.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_logging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_logging.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/ios_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_ntp.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_ntp.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_ntp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_ntp.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_ntp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_ping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_ping.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_ping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_static_route.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_static_route.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_system.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_system.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_user.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_user.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/ios_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_vrf.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_vrf.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/itential/iap_start_workflow.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/itential/iap_token.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_banner.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_banner.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_config.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_interfaces.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/network/junos/junos_lag_interfaces.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_logging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_logging.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/junos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_package.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_package.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_package.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_package.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_package.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_package.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_ping.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_ping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_ping.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_ping.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_ping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_ping.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_scp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_scp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_scp.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_scp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_scp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_scp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_system.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_system.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/junos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_vlans.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/meraki/meraki_admin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_config_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_malware.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_mr_l3_firewall.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/network/meraki/meraki_mx_l3_firewall.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_mx_l7_firewall.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/meraki/meraki_mx_l7_firewall.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_mx_l7_firewall.py pylint:ansible-bad-function
lib/ansible/modules/network/meraki/meraki_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_organization.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_ssid.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/meraki/meraki_switchport.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/meraki/meraki_switchport.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/netact/netact_cm_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/netact/netact_cm_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netconf/netconf_config.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/netconf/netconf_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/netconf/netconf_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netconf/netconf_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netconf/netconf_get.py validate-modules:doc-missing-type
lib/ansible/modules/network/netconf/netconf_get.py validate-modules:return-syntax-error
lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:doc-missing-type
lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:return-syntax-error
lib/ansible/modules/network/netscaler/netscaler_cs_action.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/netscaler/netscaler_cs_action.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_cs_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:undocumented-parameter
lib/ansible/modules/network/netscaler/netscaler_gslb_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_gslb_site.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_gslb_vserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_gslb_vserver.py validate-modules:undocumented-parameter
lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_lb_vserver.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/netscaler/netscaler_lb_vserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_nitro_request.py validate-modules:doc-missing-type
lib/ansible/modules/network/netscaler/netscaler_nitro_request.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netscaler/netscaler_nitro_request.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_nitro_request.py pylint:ansible-bad-function
lib/ansible/modules/network/netscaler/netscaler_save_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/netscaler/netscaler_save_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_server.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/netscaler/netscaler_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/netscaler/netscaler_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_servicegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_ssl_certkey.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_cluster.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_cluster.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_ospf.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_ospf.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_ospf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/_pn_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_ospfarea.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_ospfarea.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_ospfarea.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_show.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_show.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_show.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/_pn_show.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_trunk.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_trunk.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_trunk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_vlag.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vlag.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vlag.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_vlan.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vlan.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_vrouter.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouter.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_vrouterbgp.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterbgp.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterbgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_vrouterif.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterif.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterif.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/_pn_vrouterif.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_access_list.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_access_list_ip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_cpu_class.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_dscp_map.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_fabric_local.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_igmp_snooping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_log_audit_exception.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/pn_port_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_role.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/pn_snmp_community.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_switch_setup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_vrouter_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_vrouter_pim_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nos/nos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/nos/nos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nos/nos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/nos/nos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nos/nos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nso/nso_action.py validate-modules:doc-missing-type
lib/ansible/modules/network/nso/nso_action.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nso/nso_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nso/nso_config.py validate-modules:return-syntax-error
lib/ansible/modules/network/nso/nso_query.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nso/nso_show.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nso/nso_verify.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/_nxos_ip_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/_nxos_ip_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/_nxos_mtu.py future-import-boilerplate
lib/ansible/modules/network/nxos/_nxos_mtu.py metaclass-boilerplate
lib/ansible/modules/network/nxos/_nxos_portchannel.py future-import-boilerplate
lib/ansible/modules/network/nxos/_nxos_portchannel.py metaclass-boilerplate
lib/ansible/modules/network/nxos/_nxos_switchport.py future-import-boilerplate
lib/ansible/modules/network/nxos/_nxos_switchport.py metaclass-boilerplate
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_aaa_server.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_acl.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_acl.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_acl_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_acl_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_banner.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_banner.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_bfd_global.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bfd_global.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bfd_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bfd_global.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bfd_global.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_bgp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_bgp_af.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_af.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_config.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_config.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_evpn_global.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_global.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_evpn_global.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_evpn_global.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_evpn_vni.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_vni.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_facts.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_facts.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_feature.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_feature.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_gir.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_gir.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_hsrp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_hsrp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_igmp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_igmp_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_install_os.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_install_os.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_interface_ospf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_interface_ospf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_lag_interfaces.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_logging.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_logging.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ntp_auth.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_auth.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ntp_options.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_options.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_nxapi.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_nxapi.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ospf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_overlay_global.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_overlay_global.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_pim.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_pim.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_pim_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ping.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ping.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_reboot.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_reboot.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_reboot.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_reboot.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_reboot.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_rollback.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_rollback.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_rpm.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_rpm.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_smu.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_smu.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snapshot.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snapshot.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_community.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_community.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_contact.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_contact.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_host.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_host.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_location.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_location.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_traps.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_traps.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snmp_user.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_user.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_static_route.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_static_route.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_system.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_system.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_udld.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_udld.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_udld_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_udld_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_user.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_user.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_vpc.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vpc_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vrf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_vrf_af.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_af.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vrf_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vrrp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrrp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vtp_domain.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_domain.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vtp_password.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_password.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vtp_version.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_version.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_buffer_pool.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_buffer_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_igmp.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_igmp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_igmp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_igmp_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_igmp_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_igmp_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_magp.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_magp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_mlag_ipl.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_mlag_vip.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/onyx/onyx_mlag_vip.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_mlag_vip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_protocol.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_ptp_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_ptp_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_ptp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_ptp_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_qos.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_qos.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_traffic_class.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_traffic_class.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/opx/opx_cps.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/opx/opx_cps.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ordnance/ordnance_config.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ordnance/ordnance_facts.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/ovs/openvswitch_bridge.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ovs/openvswitch_bridge.py validate-modules:doc-missing-type
lib/ansible/modules/network/ovs/openvswitch_bridge.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ovs/openvswitch_db.py validate-modules:doc-missing-type
lib/ansible/modules/network/ovs/openvswitch_db.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ovs/openvswitch_port.py validate-modules:doc-missing-type
lib/ansible/modules/network/ovs/openvswitch_port.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_admin.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_admin.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_admin.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_admin.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_admpwd.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_admpwd.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_admpwd.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_check.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_check.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_check.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_commit.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_commit.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_commit.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_commit.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_commit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_dag.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_dag.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_dag.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_dag_tags.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_dag_tags.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_dag_tags.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_dag_tags.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_dag_tags.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_import.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_import.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_import.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_interface.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_interface.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_lic.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_lic.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_lic.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_lic.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_loadcfg.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_loadcfg.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_loadcfg.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_match_rule.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_match_rule.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_match_rule.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_match_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_match_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_mgtconfig.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_mgtconfig.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_mgtconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_nat_policy.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_nat_policy.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_nat_rule.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_nat_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_nat_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_object.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_object.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_object.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_object.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_object.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_op.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_op.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_op.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_op.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_pg.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_pg.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_pg.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_query_rules.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_query_rules.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_query_rules.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_query_rules.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_restart.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_restart.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_restart.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_sag.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_sag.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_sag.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_sag.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_security_policy.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_security_policy.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_security_rule.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_security_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_security_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_set.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_set.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_set.py validate-modules:doc-missing-type
lib/ansible/modules/network/radware/vdirect_commit.py validate-modules:doc-missing-type
lib/ansible/modules/network/radware/vdirect_commit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/radware/vdirect_file.py validate-modules:doc-missing-type
lib/ansible/modules/network/radware/vdirect_file.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/radware/vdirect_runnable.py validate-modules:doc-missing-type
lib/ansible/modules/network/radware/vdirect_runnable.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/restconf/restconf_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/restconf/restconf_get.py validate-modules:doc-missing-type
lib/ansible/modules/network/routeros/routeros_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/routeros/routeros_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/routeros/routeros_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:doc-missing-type
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:undocumented-parameter
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:doc-missing-type
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:undocumented-parameter
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:doc-missing-type
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:undocumented-parameter
lib/ansible/modules/network/slxos/slxos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/slxos/slxos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/sros/sros_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/sros/sros_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/sros/sros_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/sros/sros_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/sros/sros_command.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/sros/sros_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/sros/sros_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/sros/sros_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/sros/sros_config.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/sros/sros_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/sros/sros_config.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/sros/sros_rollback.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/sros/sros_rollback.py validate-modules:doc-missing-type
lib/ansible/modules/network/sros/sros_rollback.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/sros/sros_rollback.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/sros/sros_rollback.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/voss/voss_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/voss/voss_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/voss/voss_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/voss/voss_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/voss/voss_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/_vyos_interface.py future-import-boilerplate
lib/ansible/modules/network/vyos/_vyos_interface.py metaclass-boilerplate
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/_vyos_l3_interface.py future-import-boilerplate
lib/ansible/modules/network/vyos/_vyos_l3_interface.py metaclass-boilerplate
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/_vyos_linkagg.py future-import-boilerplate
lib/ansible/modules/network/vyos/_vyos_linkagg.py metaclass-boilerplate
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/_vyos_lldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/_vyos_lldp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py future-import-boilerplate
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py metaclass-boilerplate
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/vyos_banner.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_banner.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_command.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_command.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_command.py pylint:blacklisted-name
lib/ansible/modules/network/vyos/vyos_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_config.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_config.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_facts.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_facts.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_lldp_interfaces.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_logging.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_logging.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/vyos_ping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_ping.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_ping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_static_route.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_static_route.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/vyos_system.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_system.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_user.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_user.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/bearychat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/campfire.py validate-modules:doc-missing-type
lib/ansible/modules/notification/catapult.py validate-modules:doc-missing-type
lib/ansible/modules/notification/catapult.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/cisco_spark.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/notification/cisco_spark.py validate-modules:doc-missing-type
lib/ansible/modules/notification/cisco_spark.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/flowdock.py validate-modules:doc-missing-type
lib/ansible/modules/notification/grove.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/hipchat.py validate-modules:doc-missing-type
lib/ansible/modules/notification/hipchat.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/irc.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/notification/irc.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/notification/irc.py validate-modules:doc-missing-type
lib/ansible/modules/notification/irc.py validate-modules:doc-required-mismatch
lib/ansible/modules/notification/irc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/irc.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/jabber.py validate-modules:doc-missing-type
lib/ansible/modules/notification/jabber.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/logentries_msg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/mail.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/notification/mail.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/matrix.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/mattermost.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/mqtt.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/notification/mqtt.py validate-modules:doc-missing-type
lib/ansible/modules/notification/mqtt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/nexmo.py validate-modules:doc-missing-type
lib/ansible/modules/notification/nexmo.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/office_365_connector_card.py validate-modules:doc-missing-type
lib/ansible/modules/notification/office_365_connector_card.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/pushbullet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/pushbullet.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/pushover.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/notification/pushover.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/notification/pushover.py validate-modules:doc-missing-type
lib/ansible/modules/notification/pushover.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/rabbitmq_publish.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/rocketchat.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/notification/rocketchat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/say.py validate-modules:doc-missing-type
lib/ansible/modules/notification/sendgrid.py validate-modules:doc-missing-type
lib/ansible/modules/notification/sendgrid.py validate-modules:doc-required-mismatch
lib/ansible/modules/notification/sendgrid.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/sendgrid.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/slack.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/notification/slack.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/syslogger.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/telegram.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/twilio.py validate-modules:doc-missing-type
lib/ansible/modules/notification/twilio.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/typetalk.py validate-modules:doc-missing-type
lib/ansible/modules/notification/typetalk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/bower.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/bower.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/bundler.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/language/bundler.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/bundler.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/composer.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/language/composer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/cpanm.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/cpanm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/easy_install.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/language/easy_install.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/easy_install.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/gem.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/maven_artifact.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/maven_artifact.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/pear.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/language/pear.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/pear.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/pear.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/language/pip.py pylint:blacklisted-name
lib/ansible/modules/packaging/language/yarn.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/yarn.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apk.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/apk.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/apk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apt.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/apt.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/apt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apt.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apt_key.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/apt_repo.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apt_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/apt_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apt_repository.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/dnf.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/packaging/os/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/dnf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/dpkg_selections.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/dpkg_selections.py validate-modules:doc-required-mismatch
lib/ansible/modules/packaging/os/flatpak.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/flatpak.py validate-modules:use-run-command-not-popen
lib/ansible/modules/packaging/os/flatpak_remote.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/flatpak_remote.py validate-modules:use-run-command-not-popen
lib/ansible/modules/packaging/os/homebrew.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/homebrew.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/homebrew.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/homebrew.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:doc-required-mismatch
lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/homebrew_tap.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/homebrew_tap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/layman.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/layman.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/macports.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/macports.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/macports.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/openbsd_pkg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/openbsd_pkg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/opkg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/opkg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/opkg.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/opkg.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/opkg.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/package_facts.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/package_facts.py validate-modules:return-syntax-error
lib/ansible/modules/packaging/os/pacman.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/pacman.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/pacman.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/pkg5.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/pkg5.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/pkg5_publisher.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/pkg5_publisher.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/pkgin.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/pkgin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/pkgin.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/pkgng.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/pkgng.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/pkgng.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/pkgutil.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/portage.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/portage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/portage.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/portinstall.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/portinstall.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:doc-required-mismatch
lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/redhat_subscription.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/redhat_subscription.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/redhat_subscription.py validate-modules:return-syntax-error
lib/ansible/modules/packaging/os/rhn_channel.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/rhn_channel.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/rhn_channel.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/rhsm_release.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:doc-required-mismatch
lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/rpm_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/snap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/sorcery.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/sorcery.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/svr4pkg.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/swdepot.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/swdepot.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/swupd.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/urpmi.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/urpmi.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/urpmi.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/urpmi.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/urpmi.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/xbps.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/xbps.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/xbps.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/xbps.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/xbps.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/yum.py pylint:blacklisted-name
lib/ansible/modules/packaging/os/yum.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/yum.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/yum.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/yum.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/yum.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/zypper.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/zypper.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/zypper.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/zypper_repository.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/zypper_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cobbler/cobbler_sync.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/cobbler/cobbler_sync.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cobbler/cobbler_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/cobbler/cobbler_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cpm/cpm_plugconfig.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/cpm/cpm_plugconfig.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/cpm/cpm_plugconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cpm/cpm_plugcontrol.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/cpm/cpm_plugcontrol.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/cpm/cpm_plugcontrol.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cpm/cpm_serial_port_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cpm/cpm_serial_port_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/cpm/cpm_serial_port_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cpm/cpm_user.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/cpm/cpm_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/dellemc/idrac_server_config_profile.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/dellemc/idrac_server_config_profile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/foreman/_foreman.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/foreman/_katello.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/foreman/_katello.py yamllint:unparsable-with-libyaml
lib/ansible/modules/remote_management/hpilo/hpilo_boot.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/hpilo/hpilo_boot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/hpilo/hpilo_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/hpilo/hponcfg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/imc/imc_rest.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/intersight/intersight_rest_api.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ipmi/ipmi_boot.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/ipmi/ipmi_boot.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/ipmi/ipmi_boot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ipmi/ipmi_power.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/ipmi/ipmi_power.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/ipmi/ipmi_power.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/lxca/lxca_cmms.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/lxca/lxca_nodes.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_datacenter_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_datacenter_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_enclosure_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_enclosure_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_fc_network_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_fc_network_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_network_set.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/oneview/oneview_network_set.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_network_set.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_network_set_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_network_set_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_san_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_san_manager.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_san_manager_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_san_manager_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_ip_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_lan_connectivity.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_mac_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_managed_objects.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_managed_objects.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/ucs/ucs_ntp_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/ucs/ucs_service_profile_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_timezone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_uuid_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/ucs/ucs_vlans.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/ucs/ucs_vlans.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_vnic_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/ucs/ucs_vnic_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_vsans.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/ucs/ucs_vsans.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_vsans.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/wakeonlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/bzr.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/git.py pylint:blacklisted-name
lib/ansible/modules/source_control/git.py use-argspec-type-path
lib/ansible/modules/source_control/git.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/source_control/git.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/git_config.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/git_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/github/_github_hooks.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/github/github_deploy_key.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/github/github_deploy_key.py validate-modules:parameter-invalid
lib/ansible/modules/source_control/github/github_deploy_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/github/github_issue.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/github/github_issue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/github/github_key.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/github/github_release.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/github/github_release.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/github/github_webhook.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/github/github_webhook_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/gitlab/gitlab_deploy_key.py validate-modules:doc-required-mismatch
lib/ansible/modules/source_control/gitlab/gitlab_hook.py validate-modules:doc-required-mismatch
lib/ansible/modules/source_control/gitlab/gitlab_runner.py validate-modules:doc-required-mismatch
lib/ansible/modules/source_control/hg.py validate-modules:doc-required-mismatch
lib/ansible/modules/source_control/hg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/subversion.py validate-modules:doc-required-mismatch
lib/ansible/modules/source_control/subversion.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/subversion.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/emc/emc_vnx_sg_member.py validate-modules:doc-missing-type
lib/ansible/modules/storage/emc/emc_vnx_sg_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/glusterfs/gluster_heal_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/glusterfs/gluster_peer.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/glusterfs/gluster_peer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/glusterfs/gluster_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/ibm/ibm_sa_domain.py validate-modules:doc-missing-type
lib/ansible/modules/storage/ibm/ibm_sa_domain.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/ibm/ibm_sa_host.py validate-modules:doc-missing-type
lib/ansible/modules/storage/ibm/ibm_sa_host.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/ibm/ibm_sa_host_ports.py validate-modules:doc-missing-type
lib/ansible/modules/storage/ibm/ibm_sa_host_ports.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/ibm/ibm_sa_pool.py validate-modules:doc-missing-type
lib/ansible/modules/storage/ibm/ibm_sa_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/ibm/ibm_sa_vol.py validate-modules:doc-missing-type
lib/ansible/modules/storage/ibm/ibm_sa_vol.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/ibm/ibm_sa_vol_map.py validate-modules:doc-missing-type
lib/ansible/modules/storage/ibm/ibm_sa_vol_map.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:doc-missing-type
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/infinidat/infini_export_client.py validate-modules:doc-missing-type
lib/ansible/modules/storage/infinidat/infini_export_client.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/infinidat/infini_fs.py validate-modules:doc-missing-type
lib/ansible/modules/storage/infinidat/infini_host.py validate-modules:doc-missing-type
lib/ansible/modules/storage/infinidat/infini_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/infinidat/infini_pool.py validate-modules:doc-missing-type
lib/ansible/modules/storage/infinidat/infini_vol.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_aggregate.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_aggregate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_license.py validate-modules:incompatible-default-type
lib/ansible/modules/storage/netapp/_na_cdot_license.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_lun.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_lun.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_qtree.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_qtree.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_svm.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_svm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_user.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_user_role.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_user_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/_na_ontap_gather_facts.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_ontap_gather_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_sf_account_manager.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_sf_account_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_sf_check_connections.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_sf_snapshot_schedule_manager.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_sf_snapshot_schedule_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_sf_volume_access_group_manager.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_sf_volume_access_group_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:parameter-invalid
lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/na_elementsw_access_group.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_access_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_account.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_account.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_admin_users.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_admin_users.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_backup.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_backup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_check_connections.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_cluster_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_cluster_pair.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_cluster_pair.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_cluster_snmp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_drive.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_drive.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/na_elementsw_ldap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_network_interfaces.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_node.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_node.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_restore.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_schedule.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_schedule.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_schedule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_volume.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_volume.py validate-modules:parameter-invalid
lib/ansible/modules/storage/netapp/na_elementsw_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_volume_clone.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_volume_clone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_volume_pair.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_volume_pair.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_aggregate.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_aggregate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_autosupport.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_autosupport.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain_ports.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain_ports.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain_ports.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cg_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_cg_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cifs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cifs_acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cifs_server.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_cifs_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cluster.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cluster_ha.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_cluster_peer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_disks.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_dns.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_dns.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_export_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_export_policy_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_fcp.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_fcp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_firewall_policy.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_firewall_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_firmware_upgrade.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_flexcache.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_igroup.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_igroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_igroup_initiator.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_igroup_initiator.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_interface.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_ipspace.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_ipspace.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_iscsi.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_iscsi.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_job_schedule.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_job_schedule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_license.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_license.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_lun.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_lun.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_lun_copy.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_lun_copy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_lun_map.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_lun_map.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_motd.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_ifgrp.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_ifgrp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_net_port.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_port.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_net_routes.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_routes.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_net_subnet.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_subnet.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_net_subnet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_net_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_nfs.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_nfs.py validate-modules:parameter-invalid
lib/ansible/modules/storage/netapp/na_ontap_nfs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_node.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_ntp.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_ntp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_nvme.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_nvme_namespace.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_nvme_namespace.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_nvme_subsystem.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_portset.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_portset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_qos_policy_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_qos_policy_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_qtree.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_qtree.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_security_key_manager.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_security_key_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_service_processor_network.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_service_processor_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_snapmirror.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_snapshot_policy.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_snapshot_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_snmp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_software_update.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_svm.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_svm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_svm_options.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_ucadapter.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_ucadapter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_unix_group.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_unix_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_unix_user.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_unix_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_user.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_user_role.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_user_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_volume.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_volume_clone.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_access_policy.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_access_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_demand_task.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_demand_task.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_vscan_scanner_pool.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_vscan_scanner_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_vserver_peer.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_vserver_peer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_alerts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_amg.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_amg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_amg.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_amg_sync.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_amg_sync.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_asup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_auditlog.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_auth.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_auth.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_facts.py validate-modules:return-syntax-error
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_hostgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_iscsi_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_iscsi_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_iscsi_target.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_ldap.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_ldap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_lun_mapping.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_lun_mapping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_mgmt_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_storagepool.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_storagepool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_syslog.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_syslog.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/purestorage/_purefa_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/_purefa_facts.py validate-modules:return-syntax-error
lib/ansible/modules/storage/purestorage/_purefb_facts.py validate-modules:return-syntax-error
lib/ansible/modules/storage/purestorage/purefa_alert.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_arrayname.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_connect.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_dns.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_ds.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_dsrole.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_dsrole.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/purestorage/purefa_hg.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_host.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_info.py validate-modules:return-syntax-error
lib/ansible/modules/storage/purestorage/purefa_ntp.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_offload.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_pg.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_pgsnap.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_pgsnap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/purestorage/purefa_phonehome.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_ra.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_smtp.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_snap.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_snmp.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_syslog.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_vg.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_volume.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefb_ds.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefb_dsrole.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefb_fs.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/purestorage/purefb_info.py validate-modules:return-syntax-error
lib/ansible/modules/storage/purestorage/purefb_s3acc.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefb_s3user.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/zfs/zfs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/zfs/zfs_delegate_admin.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/zfs/zfs_delegate_admin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/zfs/zfs_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/zfs/zfs_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/zfs/zpool_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/zfs/zpool_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/aix_devices.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/aix_filesystem.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/aix_inittab.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/alternatives.py pylint:blacklisted-name
lib/ansible/modules/system/at.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/authorized_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/beadm.py pylint:blacklisted-name
lib/ansible/modules/system/cronvar.py pylint:blacklisted-name
lib/ansible/modules/system/dconf.py pylint:blacklisted-name
lib/ansible/modules/system/dconf.py validate-modules:doc-missing-type
lib/ansible/modules/system/dconf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/filesystem.py pylint:blacklisted-name
lib/ansible/modules/system/filesystem.py validate-modules:doc-missing-type
lib/ansible/modules/system/gconftool2.py pylint:blacklisted-name
lib/ansible/modules/system/gconftool2.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/getent.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/hostname.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/interfaces_file.py pylint:blacklisted-name
lib/ansible/modules/system/interfaces_file.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/iptables.py pylint:blacklisted-name
lib/ansible/modules/system/java_cert.py pylint:blacklisted-name
lib/ansible/modules/system/java_keystore.py validate-modules:doc-missing-type
lib/ansible/modules/system/kernel_blacklist.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/known_hosts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/system/known_hosts.py validate-modules:doc-missing-type
lib/ansible/modules/system/known_hosts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/locale_gen.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/lvg.py pylint:blacklisted-name
lib/ansible/modules/system/lvol.py pylint:blacklisted-name
lib/ansible/modules/system/lvol.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/lvol.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/mksysb.py validate-modules:doc-missing-type
lib/ansible/modules/system/modprobe.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/nosh.py validate-modules:doc-missing-type
lib/ansible/modules/system/nosh.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/nosh.py validate-modules:return-syntax-error
lib/ansible/modules/system/openwrt_init.py validate-modules:doc-missing-type
lib/ansible/modules/system/openwrt_init.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/osx_defaults.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/pam_limits.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/parted.py pylint:blacklisted-name
lib/ansible/modules/system/puppet.py use-argspec-type-path
lib/ansible/modules/system/puppet.py validate-modules:parameter-invalid
lib/ansible/modules/system/puppet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/puppet.py validate-modules:undocumented-parameter
lib/ansible/modules/system/python_requirements_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/runit.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/system/runit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/runit.py validate-modules:undocumented-parameter
lib/ansible/modules/system/seboolean.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/selinux.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/selogin.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/selogin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/system/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/system/setup.py validate-modules:doc-missing-type
lib/ansible/modules/system/setup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/solaris_zone.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/sysctl.py validate-modules:doc-missing-type
lib/ansible/modules/system/sysctl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/syspatch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/system/systemd.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/system/sysvinit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/system/timezone.py pylint:blacklisted-name
lib/ansible/modules/system/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/system/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/system/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/system/vdo.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/xfconf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/utilities/logic/async_status.py use-argspec-type-path
lib/ansible/modules/utilities/logic/async_status.py validate-modules!skip
lib/ansible/modules/utilities/logic/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/utilities/logic/async_wrapper.py use-argspec-type-path
lib/ansible/modules/utilities/logic/async_wrapper.py pylint:ansible-bad-function
lib/ansible/modules/web_infrastructure/_nginx_status_facts.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/_nginx_status_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential_type.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential_type.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_group.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/ansible_tower/tower_group.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/web_infrastructure/ansible_tower/tower_group.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_host.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/ansible_tower/tower_host.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory_source.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory_source.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_cancel.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_list.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_list.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_template.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_template.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_wait.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_label.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_notification.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_notification.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_organization.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_project.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_project.py validate-modules:doc-required-mismatch
lib/ansible/modules/web_infrastructure/ansible_tower/tower_project.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_receive.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_role.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_role.py validate-modules:doc-required-mismatch
lib/ansible/modules/web_infrastructure/ansible_tower/tower_send.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_send.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_settings.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_team.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_team.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/ansible_tower/tower_user.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_launch.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_launch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_template.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/apache2_mod_proxy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/web_infrastructure/apache2_mod_proxy.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/web_infrastructure/apache2_mod_proxy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/apache2_module.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/apache2_module.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/deploy_helper.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/deploy_helper.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/ejabberd_user.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ejabberd_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/web_infrastructure/ejabberd_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/gunicorn.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/gunicorn.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/htpasswd.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/web_infrastructure/htpasswd.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jenkins_job.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jenkins_job_info.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jenkins_plugin.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/jenkins_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jenkins_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/jenkins_plugin.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/jenkins_script.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/jira.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jira.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/jira.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/rundeck_acl_policy.py pylint:blacklisted-name
lib/ansible/modules/web_infrastructure/rundeck_acl_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/rundeck_project.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_aaa_group_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_ca_host_key_cert.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_ca_host_key_cert_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_dns_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_network_interface_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_network_interface_address_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_auth_profile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_exception.py validate-modules:return-syntax-error
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_frontend.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_frontend_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_location.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_location_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/supervisorctl.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/supervisorctl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/taiga_issue.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/taiga_issue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/_win_msi.py future-import-boilerplate
lib/ansible/modules/windows/_win_msi.py metaclass-boilerplate
lib/ansible/modules/windows/async_status.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/setup.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_acl_inheritance.ps1 pslint:PSAvoidTrailingWhitespace
lib/ansible/modules/windows/win_audit_rule.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_certificate_store.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_chocolatey_config.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_chocolatey_facts.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_chocolatey_source.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_copy.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_credential.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_credential.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_dns_client.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_domain.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_domain.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_domain_controller.ps1 pslint:PSAvoidGlobalVars # New PR
lib/ansible/modules/windows/win_domain_controller.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_domain_controller.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_domain_membership.ps1 pslint:PSAvoidGlobalVars # New PR
lib/ansible/modules/windows/win_domain_membership.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_domain_membership.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_dotnet_ngen.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_dsc.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_dsc.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_eventlog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_feature.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_file_version.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_find.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_firewall_rule.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_hotfix.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_hotfix.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_http_proxy.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_iis_virtualdirectory.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webapplication.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webapppool.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webbinding.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webbinding.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_iis_website.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_inet_proxy.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_lineinfile.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_mapped_drive.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_package.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_package.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_pagefile.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_pagefile.ps1 pslint:PSUseDeclaredVarsMoreThanAssignments # New PR - bug test_path should be testPath
lib/ansible/modules/windows/win_pagefile.ps1 pslint:PSUseSupportsShouldProcess
lib/ansible/modules/windows/win_product_facts.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_psexec.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_rabbitmq_plugin.ps1 pslint:PSAvoidUsingInvokeExpression
lib/ansible/modules/windows/win_rabbitmq_plugin.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_rds_cap.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_rds_rap.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_rds_settings.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_regedit.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_region.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_region.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_regmerge.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_robocopy.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_say.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_security_policy.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_security_policy.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_share.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_shell.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_shortcut.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_snmp.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_unzip.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_unzip.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_updates.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_uri.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_user_profile.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_user_profile.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_wait_for.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_webpicmd.ps1 pslint:PSAvoidUsingInvokeExpression
lib/ansible/modules/windows/win_xml.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name
lib/ansible/playbook/base.py pylint:blacklisted-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:blacklisted-name
lib/ansible/playbook/role/__init__.py pylint:blacklisted-name
lib/ansible/plugins/action/aireos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/aruba.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/asa.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/bigip.py action-plugin-docs # undocumented action plugin to fix, existed before sanity test was added
lib/ansible/plugins/action/bigiq.py action-plugin-docs # undocumented action plugin to fix, existed before sanity test was added
lib/ansible/plugins/action/ce.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/ce_template.py action-plugin-docs # undocumented action plugin to fix, existed before sanity test was added
lib/ansible/plugins/action/cnos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/dellos10.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/dellos6.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/dellos9.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/enos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/eos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/exos.py action-plugin-docs # undocumented action plugin to fix
lib/ansible/plugins/action/ios.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/iosxr.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/ironware.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/junos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/net_base.py action-plugin-docs # base class for other net_* action plugins which have a matching module
lib/ansible/plugins/action/netconf.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/network.py action-plugin-docs # base class for network action plugins
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/action/nxos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/slxos.py action-plugin-docs # undocumented action plugin to fix
lib/ansible/plugins/action/sros.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/voss.py action-plugin-docs # undocumented action plugin to fix
lib/ansible/plugins/action/vyos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/callback/hipchat.py pylint:blacklisted-name
lib/ansible/plugins/connection/lxc.py pylint:blacklisted-name
lib/ansible/plugins/connection/vmware_tools.py yamllint:unparsable-with-libyaml
lib/ansible/plugins/doc_fragments/a10.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/a10.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aireos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aireos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/alicloud.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/alicloud.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aruba.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aruba.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/asa.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/asa.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/auth_basic.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/auth_basic.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/avi.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/avi.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aws.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aws.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aws_credentials.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aws_credentials.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aws_region.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aws_region.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/azure.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/azure.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/azure_tags.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/azure_tags.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/backup.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/backup.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ce.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ce.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/cnos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/cnos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/constructed.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/constructed.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/decrypt.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/decrypt.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/default_callback.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/default_callback.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dellos10.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dellos10.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dellos6.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dellos6.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dellos9.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dellos9.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/digital_ocean.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/digital_ocean.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dimensiondata.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dimensiondata.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dimensiondata_wait.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dimensiondata_wait.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ec2.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ec2.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/emc.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/emc.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/enos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/enos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/eos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/eos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/f5.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/f5.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/files.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/files.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/fortios.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/fortios.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/gcp.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/gcp.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/hcloud.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/hcloud.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/hetzner.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/hetzner.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/hpe3par.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/hpe3par.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/hwc.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/hwc.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/infinibox.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/infinibox.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/influxdb.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/influxdb.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ingate.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ingate.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/intersight.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/intersight.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/inventory_cache.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/inventory_cache.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ios.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ios.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/iosxr.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/iosxr.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ipa.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ipa.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ironware.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ironware.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/junos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/junos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_auth_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_auth_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_name_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_name_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_resource_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_resource_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_scale_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_scale_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_state_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_state_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/keycloak.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/keycloak.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/kubevirt_common_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/kubevirt_common_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/kubevirt_vm_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/kubevirt_vm_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ldap.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ldap.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/lxca_common.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/lxca_common.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/manageiq.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/manageiq.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/meraki.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/meraki.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/mysql.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/mysql.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/netapp.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/netapp.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/netconf.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/netconf.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/netscaler.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/netscaler.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/network_agnostic.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/network_agnostic.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/nios.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/nios.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/nso.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/nso.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/nxos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/nxos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oneview.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oneview.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/online.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/online.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/onyx.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/onyx.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/opennebula.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/opennebula.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/openstack.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/openstack.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/openswitch.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/openswitch.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_creatable_resource.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_creatable_resource.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_display_name_option.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_display_name_option.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_name_option.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_name_option.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_tags.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_tags.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_wait_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_wait_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ovirt.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ovirt.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ovirt_info.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ovirt_info.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/panos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/panos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/postgres.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/postgres.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/proxysql.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/proxysql.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/purestorage.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/purestorage.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/rabbitmq.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/rabbitmq.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/rackspace.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/rackspace.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/return_common.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/return_common.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/scaleway.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/scaleway.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/shell_common.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/shell_common.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/shell_windows.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/shell_windows.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/skydive.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/skydive.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/sros.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/sros.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/tower.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/tower.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ucs.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ucs.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/url.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/url.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/utm.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/utm.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/validate.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/validate.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vca.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vca.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vexata.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vexata.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vmware.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vmware.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vmware_rest_client.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vmware_rest_client.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vultr.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vultr.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vyos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vyos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/xenserver.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/xenserver.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/zabbix.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/zabbix.py metaclass-boilerplate
lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name
lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name
lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name
lib/ansible/vars/hostvars.py pylint:blacklisted-name
setup.py future-import-boilerplate
setup.py metaclass-boilerplate
test/integration/targets/ansible-runner/files/adhoc_example1.py future-import-boilerplate
test/integration/targets/ansible-runner/files/adhoc_example1.py metaclass-boilerplate
test/integration/targets/ansible-runner/files/playbook_example1.py future-import-boilerplate
test/integration/targets/ansible-runner/files/playbook_example1.py metaclass-boilerplate
test/integration/targets/async/library/async_test.py future-import-boilerplate
test/integration/targets/async/library/async_test.py metaclass-boilerplate
test/integration/targets/async_fail/library/async_test.py future-import-boilerplate
test/integration/targets/async_fail/library/async_test.py metaclass-boilerplate
test/integration/targets/aws_lambda/files/mini_lambda.py future-import-boilerplate
test/integration/targets/aws_lambda/files/mini_lambda.py metaclass-boilerplate
test/integration/targets/collections_plugin_namespace/collection_root/ansible_collections/my_ns/my_col/plugins/lookup/lookup_no_future_boilerplate.py future-import-boilerplate
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/expect/files/test_command.py future-import-boilerplate
test/integration/targets/expect/files/test_command.py metaclass-boilerplate
test/integration/targets/get_url/files/testserver.py future-import-boilerplate
test/integration/targets/get_url/files/testserver.py metaclass-boilerplate
test/integration/targets/group/files/gidget.py future-import-boilerplate
test/integration/targets/group/files/gidget.py metaclass-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_exec.py future-import-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_exec.py metaclass-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_put_file.py future-import-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_put_file.py metaclass-boilerplate
test/integration/targets/inventory_kubevirt_conformance/inventory_diff.py future-import-boilerplate
test/integration/targets/inventory_kubevirt_conformance/inventory_diff.py metaclass-boilerplate
test/integration/targets/inventory_kubevirt_conformance/server.py future-import-boilerplate
test/integration/targets/inventory_kubevirt_conformance/server.py metaclass-boilerplate
test/integration/targets/jinja2_native_types/filter_plugins/native_plugins.py future-import-boilerplate
test/integration/targets/jinja2_native_types/filter_plugins/native_plugins.py metaclass-boilerplate
test/integration/targets/lambda_policy/files/mini_http_lambda.py future-import-boilerplate
test/integration/targets/lambda_policy/files/mini_http_lambda.py metaclass-boilerplate
test/integration/targets/lookup_properties/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/ping.py future-import-boilerplate
test/integration/targets/module_precedence/lib_with_extension/ping.py metaclass-boilerplate
test/integration/targets/module_precedence/multiple_roles/bar/library/ping.py future-import-boilerplate
test/integration/targets/module_precedence/multiple_roles/bar/library/ping.py metaclass-boilerplate
test/integration/targets/module_precedence/multiple_roles/foo/library/ping.py future-import-boilerplate
test/integration/targets/module_precedence/multiple_roles/foo/library/ping.py metaclass-boilerplate
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.py future-import-boilerplate
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.py metaclass-boilerplate
test/integration/targets/module_utils/library/test.py future-import-boilerplate
test/integration/targets/module_utils/library/test.py metaclass-boilerplate
test/integration/targets/module_utils/library/test_env_override.py future-import-boilerplate
test/integration/targets/module_utils/library/test_env_override.py metaclass-boilerplate
test/integration/targets/module_utils/library/test_failure.py future-import-boilerplate
test/integration/targets/module_utils/library/test_failure.py metaclass-boilerplate
test/integration/targets/module_utils/library/test_override.py future-import-boilerplate
test/integration/targets/module_utils/library/test_override.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/pause/test-pause.py future-import-boilerplate
test/integration/targets/pause/test-pause.py metaclass-boilerplate
test/integration/targets/pip/files/ansible_test_pip_chdir/__init__.py future-import-boilerplate
test/integration/targets/pip/files/ansible_test_pip_chdir/__init__.py metaclass-boilerplate
test/integration/targets/pip/files/setup.py future-import-boilerplate
test/integration/targets/pip/files/setup.py metaclass-boilerplate
test/integration/targets/run_modules/library/test.py future-import-boilerplate
test/integration/targets/run_modules/library/test.py metaclass-boilerplate
test/integration/targets/s3_bucket_notification/files/mini_lambda.py future-import-boilerplate
test/integration/targets/s3_bucket_notification/files/mini_lambda.py metaclass-boilerplate
test/integration/targets/script/files/no_shebang.py future-import-boilerplate
test/integration/targets/script/files/no_shebang.py metaclass-boilerplate
test/integration/targets/service/files/ansible_test_service.py future-import-boilerplate
test/integration/targets/service/files/ansible_test_service.py metaclass-boilerplate
test/integration/targets/setup_rpm_repo/files/create-repo.py future-import-boilerplate
test/integration/targets/setup_rpm_repo/files/create-repo.py metaclass-boilerplate
test/integration/targets/sns_topic/files/sns_topic_lambda/sns_topic_lambda.py future-import-boilerplate
test/integration/targets/sns_topic/files/sns_topic_lambda/sns_topic_lambda.py metaclass-boilerplate
test/integration/targets/supervisorctl/files/sendProcessStdin.py future-import-boilerplate
test/integration/targets/supervisorctl/files/sendProcessStdin.py metaclass-boilerplate
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/role_filter/filter_plugins/myplugin.py future-import-boilerplate
test/integration/targets/template/role_filter/filter_plugins/myplugin.py metaclass-boilerplate
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/test_infra/library/test.py future-import-boilerplate
test/integration/targets/test_infra/library/test.py metaclass-boilerplate
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/uri/files/testserver.py future-import-boilerplate
test/integration/targets/uri/files/testserver.py metaclass-boilerplate
test/integration/targets/var_precedence/ansible-var-precedence-check.py future-import-boilerplate
test/integration/targets/var_precedence/ansible-var-precedence-check.py metaclass-boilerplate
test/integration/targets/vars_prompt/test-vars_prompt.py future-import-boilerplate
test/integration/targets/vars_prompt/test-vars_prompt.py metaclass-boilerplate
test/integration/targets/vault/test-vault-client.py future-import-boilerplate
test/integration/targets/vault/test-vault-client.py metaclass-boilerplate
test/integration/targets/wait_for/files/testserver.py future-import-boilerplate
test/integration/targets/wait_for/files/testserver.py metaclass-boilerplate
test/integration/targets/want_json_modules_posix/library/helloworld.py future-import-boilerplate
test/integration/targets/want_json_modules_posix/library/helloworld.py metaclass-boilerplate
test/integration/targets/win_audit_rule/library/test_get_audit_rule.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_chocolatey/files/tools/chocolateyUninstall.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_chocolatey_source/library/choco_source.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_csharp_utils/library/ansible_basic_tests.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_csharp_utils/library/ansible_basic_tests.ps1 pslint:PSUseDeclaredVarsMoreThanAssignments # test setup requires vars to be set globally and not referenced in the same scope
test/integration/targets/win_csharp_utils/library/ansible_become_tests.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_iis_webbinding/library/test_get_webbindings.ps1 pslint:PSUseApprovedVerbs
test/integration/targets/win_module_utils/library/argv_parser_test.ps1 pslint:PSUseApprovedVerbs
test/integration/targets/win_module_utils/library/backup_file_test.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_module_utils/library/command_util_test.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings
test/integration/targets/win_ping/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/win_psmodule/files/module/template.psd1 pslint!skip
test/integration/targets/win_psmodule/files/module/template.psm1 pslint!skip
test/integration/targets/win_psmodule/files/setup_modules.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_reboot/templates/post_reboot.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_regmerge/templates/win_line_ending.j2 line-endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_stat/library/test_symlink_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_template/files/foo.dos.txt line-endings
test/integration/targets/win_user_right/library/test_get_right.ps1 pslint:PSCustomUseLiteralPath
test/legacy/cleanup_gce.py future-import-boilerplate
test/legacy/cleanup_gce.py metaclass-boilerplate
test/legacy/cleanup_gce.py pylint:blacklisted-name
test/legacy/cleanup_rax.py future-import-boilerplate
test/legacy/cleanup_rax.py metaclass-boilerplate
test/legacy/consul_running.py future-import-boilerplate
test/legacy/consul_running.py metaclass-boilerplate
test/legacy/gce_credentials.py future-import-boilerplate
test/legacy/gce_credentials.py metaclass-boilerplate
test/legacy/gce_credentials.py pylint:blacklisted-name
test/legacy/setup_gce.py future-import-boilerplate
test/legacy/setup_gce.py metaclass-boilerplate
test/lib/ansible_test/_data/requirements/constraints.txt test-constraints
test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints
test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six
test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/lib/ansible_test/_data/setup/windows-httptester.ps1 pslint:PSCustomUseLiteralPath
test/units/config/manager/test_find_ini_config_file.py future-import-boilerplate
test/units/contrib/inventory/test_vmware_inventory.py future-import-boilerplate
test/units/contrib/inventory/test_vmware_inventory.py metaclass-boilerplate
test/units/contrib/inventory/test_vmware_inventory.py pylint:blacklisted-name
test/units/executor/test_play_iterator.py pylint:blacklisted-name
test/units/mock/path.py future-import-boilerplate
test/units/mock/path.py metaclass-boilerplate
test/units/mock/yaml_helper.py future-import-boilerplate
test/units/mock/yaml_helper.py metaclass-boilerplate
test/units/module_utils/aws/test_aws_module.py metaclass-boilerplate
test/units/module_utils/basic/test__symbolic_mode_to_octal.py future-import-boilerplate
test/units/module_utils/basic/test_deprecate_warn.py future-import-boilerplate
test/units/module_utils/basic/test_deprecate_warn.py metaclass-boilerplate
test/units/module_utils/basic/test_exit_json.py future-import-boilerplate
test/units/module_utils/basic/test_get_file_attributes.py future-import-boilerplate
test/units/module_utils/basic/test_heuristic_log_sanitize.py future-import-boilerplate
test/units/module_utils/basic/test_run_command.py future-import-boilerplate
test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name
test/units/module_utils/basic/test_safe_eval.py future-import-boilerplate
test/units/module_utils/basic/test_tmpdir.py future-import-boilerplate
test/units/module_utils/cloud/test_backoff.py future-import-boilerplate
test/units/module_utils/cloud/test_backoff.py metaclass-boilerplate
test/units/module_utils/common/test_dict_transformations.py future-import-boilerplate
test/units/module_utils/common/test_dict_transformations.py metaclass-boilerplate
test/units/module_utils/conftest.py future-import-boilerplate
test/units/module_utils/conftest.py metaclass-boilerplate
test/units/module_utils/facts/base.py future-import-boilerplate
test/units/module_utils/facts/hardware/test_sunos_get_uptime_facts.py future-import-boilerplate
test/units/module_utils/facts/hardware/test_sunos_get_uptime_facts.py metaclass-boilerplate
test/units/module_utils/facts/network/test_generic_bsd.py future-import-boilerplate
test/units/module_utils/facts/other/test_facter.py future-import-boilerplate
test/units/module_utils/facts/other/test_ohai.py future-import-boilerplate
test/units/module_utils/facts/system/test_lsb.py future-import-boilerplate
test/units/module_utils/facts/test_ansible_collector.py future-import-boilerplate
test/units/module_utils/facts/test_collector.py future-import-boilerplate
test/units/module_utils/facts/test_collectors.py future-import-boilerplate
test/units/module_utils/facts/test_facts.py future-import-boilerplate
test/units/module_utils/facts/test_timeout.py future-import-boilerplate
test/units/module_utils/facts/test_utils.py future-import-boilerplate
test/units/module_utils/gcp/test_auth.py future-import-boilerplate
test/units/module_utils/gcp/test_auth.py metaclass-boilerplate
test/units/module_utils/gcp/test_gcp_utils.py future-import-boilerplate
test/units/module_utils/gcp/test_gcp_utils.py metaclass-boilerplate
test/units/module_utils/gcp/test_utils.py future-import-boilerplate
test/units/module_utils/gcp/test_utils.py metaclass-boilerplate
test/units/module_utils/hwc/test_dict_comparison.py future-import-boilerplate
test/units/module_utils/hwc/test_dict_comparison.py metaclass-boilerplate
test/units/module_utils/hwc/test_hwc_utils.py future-import-boilerplate
test/units/module_utils/hwc/test_hwc_utils.py metaclass-boilerplate
test/units/module_utils/json_utils/test_filter_non_json_lines.py future-import-boilerplate
test/units/module_utils/net_tools/test_netbox.py future-import-boilerplate
test/units/module_utils/net_tools/test_netbox.py metaclass-boilerplate
test/units/module_utils/network/avi/test_avi_api_utils.py future-import-boilerplate
test/units/module_utils/network/avi/test_avi_api_utils.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_common.py future-import-boilerplate
test/units/module_utils/network/ftd/test_common.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_configuration.py future-import-boilerplate
test/units/module_utils/network/ftd/test_configuration.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_device.py future-import-boilerplate
test/units/module_utils/network/ftd/test_device.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_parser.py future-import-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_parser.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_validator.py future-import-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_validator.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_with_real_data.py future-import-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_with_real_data.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_upsert_functionality.py future-import-boilerplate
test/units/module_utils/network/ftd/test_upsert_functionality.py metaclass-boilerplate
test/units/module_utils/network/nso/test_nso.py metaclass-boilerplate
test/units/module_utils/parsing/test_convert_bool.py future-import-boilerplate
test/units/module_utils/postgresql/test_postgres.py future-import-boilerplate
test/units/module_utils/postgresql/test_postgres.py metaclass-boilerplate
test/units/module_utils/remote_management/dellemc/test_ome.py future-import-boilerplate
test/units/module_utils/remote_management/dellemc/test_ome.py metaclass-boilerplate
test/units/module_utils/test_database.py future-import-boilerplate
test/units/module_utils/test_database.py metaclass-boilerplate
test/units/module_utils/test_distro.py future-import-boilerplate
test/units/module_utils/test_distro.py metaclass-boilerplate
test/units/module_utils/test_hetzner.py future-import-boilerplate
test/units/module_utils/test_hetzner.py metaclass-boilerplate
test/units/module_utils/test_kubevirt.py future-import-boilerplate
test/units/module_utils/test_kubevirt.py metaclass-boilerplate
test/units/module_utils/test_netapp.py future-import-boilerplate
test/units/module_utils/test_text.py future-import-boilerplate
test/units/module_utils/test_utm_utils.py future-import-boilerplate
test/units/module_utils/test_utm_utils.py metaclass-boilerplate
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/module_utils/xenserver/FakeAnsibleModule.py future-import-boilerplate
test/units/module_utils/xenserver/FakeAnsibleModule.py metaclass-boilerplate
test/units/module_utils/xenserver/FakeXenAPI.py future-import-boilerplate
test/units/module_utils/xenserver/FakeXenAPI.py metaclass-boilerplate
test/units/modules/cloud/google/test_gce_tag.py future-import-boilerplate
test/units/modules/cloud/google/test_gce_tag.py metaclass-boilerplate
test/units/modules/cloud/google/test_gcp_forwarding_rule.py future-import-boilerplate
test/units/modules/cloud/google/test_gcp_forwarding_rule.py metaclass-boilerplate
test/units/modules/cloud/google/test_gcp_url_map.py future-import-boilerplate
test/units/modules/cloud/google/test_gcp_url_map.py metaclass-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_rs.py future-import-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_rs.py metaclass-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_vm.py future-import-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_vm.py metaclass-boilerplate
test/units/modules/cloud/linode/conftest.py future-import-boilerplate
test/units/modules/cloud/linode/conftest.py metaclass-boilerplate
test/units/modules/cloud/linode/test_linode.py metaclass-boilerplate
test/units/modules/cloud/linode_v4/conftest.py future-import-boilerplate
test/units/modules/cloud/linode_v4/conftest.py metaclass-boilerplate
test/units/modules/cloud/linode_v4/test_linode_v4.py metaclass-boilerplate
test/units/modules/cloud/misc/test_terraform.py future-import-boilerplate
test/units/modules/cloud/misc/test_terraform.py metaclass-boilerplate
test/units/modules/cloud/misc/virt_net/conftest.py future-import-boilerplate
test/units/modules/cloud/misc/virt_net/conftest.py metaclass-boilerplate
test/units/modules/cloud/misc/virt_net/test_virt_net.py future-import-boilerplate
test/units/modules/cloud/misc/virt_net/test_virt_net.py metaclass-boilerplate
test/units/modules/cloud/openstack/test_os_server.py future-import-boilerplate
test/units/modules/cloud/openstack/test_os_server.py metaclass-boilerplate
test/units/modules/cloud/xenserver/FakeAnsibleModule.py future-import-boilerplate
test/units/modules/cloud/xenserver/FakeAnsibleModule.py metaclass-boilerplate
test/units/modules/cloud/xenserver/FakeXenAPI.py future-import-boilerplate
test/units/modules/cloud/xenserver/FakeXenAPI.py metaclass-boilerplate
test/units/modules/conftest.py future-import-boilerplate
test/units/modules/conftest.py metaclass-boilerplate
test/units/modules/files/test_copy.py future-import-boilerplate
test/units/modules/messaging/rabbitmq/test_rabbimq_user.py future-import-boilerplate
test/units/modules/messaging/rabbitmq/test_rabbimq_user.py metaclass-boilerplate
test/units/modules/monitoring/test_circonus_annotation.py future-import-boilerplate
test/units/modules/monitoring/test_circonus_annotation.py metaclass-boilerplate
test/units/modules/monitoring/test_icinga2_feature.py future-import-boilerplate
test/units/modules/monitoring/test_icinga2_feature.py metaclass-boilerplate
test/units/modules/monitoring/test_pagerduty.py future-import-boilerplate
test/units/modules/monitoring/test_pagerduty.py metaclass-boilerplate
test/units/modules/monitoring/test_pagerduty_alert.py future-import-boilerplate
test/units/modules/monitoring/test_pagerduty_alert.py metaclass-boilerplate
test/units/modules/net_tools/test_nmcli.py future-import-boilerplate
test/units/modules/net_tools/test_nmcli.py metaclass-boilerplate
test/units/modules/network/avi/test_avi_user.py future-import-boilerplate
test/units/modules/network/avi/test_avi_user.py metaclass-boilerplate
test/units/modules/network/check_point/test_checkpoint_access_rule.py future-import-boilerplate
test/units/modules/network/check_point/test_checkpoint_access_rule.py metaclass-boilerplate
test/units/modules/network/check_point/test_checkpoint_host.py future-import-boilerplate
test/units/modules/network/check_point/test_checkpoint_host.py metaclass-boilerplate
test/units/modules/network/check_point/test_checkpoint_session.py future-import-boilerplate
test/units/modules/network/check_point/test_checkpoint_session.py metaclass-boilerplate
test/units/modules/network/check_point/test_checkpoint_task_facts.py future-import-boilerplate
test/units/modules/network/check_point/test_checkpoint_task_facts.py metaclass-boilerplate
test/units/modules/network/cloudvision/test_cv_server_provision.py future-import-boilerplate
test/units/modules/network/cloudvision/test_cv_server_provision.py metaclass-boilerplate
test/units/modules/network/cumulus/test_nclu.py future-import-boilerplate
test/units/modules/network/cumulus/test_nclu.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_configuration.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_configuration.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_file_download.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_file_download.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_file_upload.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_file_upload.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_install.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_install.py metaclass-boilerplate
test/units/modules/network/netscaler/netscaler_module.py future-import-boilerplate
test/units/modules/network/netscaler/netscaler_module.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_action.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_action.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_policy.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_policy.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_vserver.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_vserver.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_service.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_service.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_site.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_site.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_vserver.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_vserver.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_monitor.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_monitor.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_vserver.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_vserver.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_module_utils.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_module_utils.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_nitro_request.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_nitro_request.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_save_config.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_save_config.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_server.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_server.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_service.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_service.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_servicegroup.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_servicegroup.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_ssl_certkey.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_ssl_certkey.py metaclass-boilerplate
test/units/modules/network/nso/nso_module.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_action.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_config.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_query.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_show.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_verify.py metaclass-boilerplate
test/units/modules/network/nuage/nuage_module.py future-import-boilerplate
test/units/modules/network/nuage/nuage_module.py metaclass-boilerplate
test/units/modules/network/nuage/test_nuage_vspk.py future-import-boilerplate
test/units/modules/network/nuage/test_nuage_vspk.py metaclass-boilerplate
test/units/modules/network/nxos/test_nxos_acl_interface.py metaclass-boilerplate
test/units/modules/network/radware/test_vdirect_commit.py future-import-boilerplate
test/units/modules/network/radware/test_vdirect_commit.py metaclass-boilerplate
test/units/modules/network/radware/test_vdirect_file.py future-import-boilerplate
test/units/modules/network/radware/test_vdirect_file.py metaclass-boilerplate
test/units/modules/network/radware/test_vdirect_runnable.py future-import-boilerplate
test/units/modules/network/radware/test_vdirect_runnable.py metaclass-boilerplate
test/units/modules/notification/test_slack.py future-import-boilerplate
test/units/modules/notification/test_slack.py metaclass-boilerplate
test/units/modules/packaging/language/test_gem.py future-import-boilerplate
test/units/modules/packaging/language/test_gem.py metaclass-boilerplate
test/units/modules/packaging/language/test_pip.py future-import-boilerplate
test/units/modules/packaging/language/test_pip.py metaclass-boilerplate
test/units/modules/packaging/os/conftest.py future-import-boilerplate
test/units/modules/packaging/os/conftest.py metaclass-boilerplate
test/units/modules/packaging/os/test_apk.py future-import-boilerplate
test/units/modules/packaging/os/test_apk.py metaclass-boilerplate
test/units/modules/packaging/os/test_apt.py future-import-boilerplate
test/units/modules/packaging/os/test_apt.py metaclass-boilerplate
test/units/modules/packaging/os/test_apt.py pylint:blacklisted-name
test/units/modules/packaging/os/test_rhn_channel.py future-import-boilerplate
test/units/modules/packaging/os/test_rhn_channel.py metaclass-boilerplate
test/units/modules/packaging/os/test_rhn_register.py future-import-boilerplate
test/units/modules/packaging/os/test_rhn_register.py metaclass-boilerplate
test/units/modules/packaging/os/test_yum.py future-import-boilerplate
test/units/modules/packaging/os/test_yum.py metaclass-boilerplate
test/units/modules/remote_management/dellemc/test_ome_device_info.py future-import-boilerplate
test/units/modules/remote_management/dellemc/test_ome_device_info.py metaclass-boilerplate
test/units/modules/remote_management/lxca/test_lxca_cmms.py future-import-boilerplate
test/units/modules/remote_management/lxca/test_lxca_cmms.py metaclass-boilerplate
test/units/modules/remote_management/lxca/test_lxca_nodes.py future-import-boilerplate
test/units/modules/remote_management/lxca/test_lxca_nodes.py metaclass-boilerplate
test/units/modules/remote_management/oneview/conftest.py future-import-boilerplate
test/units/modules/remote_management/oneview/conftest.py metaclass-boilerplate
test/units/modules/remote_management/oneview/hpe_test_utils.py future-import-boilerplate
test/units/modules/remote_management/oneview/hpe_test_utils.py metaclass-boilerplate
test/units/modules/remote_management/oneview/oneview_module_loader.py future-import-boilerplate
test/units/modules/remote_management/oneview/oneview_module_loader.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_datacenter_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_datacenter_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_enclosure_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_enclosure_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager_info.py metaclass-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_access_key.py future-import-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_access_key.py metaclass-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_pipeline_key_pair.py future-import-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_pipeline_key_pair.py metaclass-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_pipeline_known_host.py future-import-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_pipeline_known_host.py metaclass-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_pipeline_variable.py future-import-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_pipeline_variable.py metaclass-boilerplate
test/units/modules/source_control/gitlab/gitlab.py future-import-boilerplate
test/units/modules/source_control/gitlab/gitlab.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_deploy_key.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_deploy_key.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_group.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_group.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_hook.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_hook.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_project.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_project.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_runner.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_runner.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_user.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_user.py metaclass-boilerplate
test/units/modules/storage/hpe3par/test_ss_3par_cpg.py future-import-boilerplate
test/units/modules/storage/hpe3par/test_ss_3par_cpg.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_config.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_config.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_snmp.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_snmp.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_initiators.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_initiators.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_aggregate.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_aggregate.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_autosupport.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_autosupport.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_broadcast_domain.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_broadcast_domain.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs_server.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs_server.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster_peer.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster_peer.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_command.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_command.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_export_policy_rule.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_export_policy_rule.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_firewall_policy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_firewall_policy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_flexcache.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_flexcache.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup_initiator.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup_initiator.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_info.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_info.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_interface.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_interface.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ipspace.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ipspace.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_job_schedule.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_job_schedule.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_copy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_copy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_map.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_map.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_motd.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_motd.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_ifgrp.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_ifgrp.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_port.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_port.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_routes.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_routes.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_subnet.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_subnet.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nfs.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nfs.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_namespace.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_namespace.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_subsystem.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_subsystem.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_portset.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_portset.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_qos_policy_group.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_qos_policy_group.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_quotas.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_quotas.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_security_key_manager.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_security_key_manager.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_service_processor_network.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_service_processor_network.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapmirror.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapmirror.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot_policy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot_policy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_software_update.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_software_update.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_svm.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_svm.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ucadapter.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ucadapter.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_group.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_group.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_user.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_user.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user_role.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user_role.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume_clone.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume_clone.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_access_policy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_access_policy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_demand_task.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_demand_task.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_scanner_pool.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_scanner_pool.py metaclass-boilerplate
test/units/modules/storage/netapp/test_netapp.py metaclass-boilerplate
test/units/modules/storage/netapp/test_netapp_e_alerts.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_asup.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_auditlog.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_global.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_host.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_iscsi_interface.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_iscsi_target.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_ldap.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_mgmt_interface.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_syslog.py future-import-boilerplate
test/units/modules/system/interfaces_file/test_interfaces_file.py future-import-boilerplate
test/units/modules/system/interfaces_file/test_interfaces_file.py metaclass-boilerplate
test/units/modules/system/interfaces_file/test_interfaces_file.py pylint:blacklisted-name
test/units/modules/system/test_iptables.py future-import-boilerplate
test/units/modules/system/test_iptables.py metaclass-boilerplate
test/units/modules/system/test_java_keystore.py future-import-boilerplate
test/units/modules/system/test_java_keystore.py metaclass-boilerplate
test/units/modules/system/test_known_hosts.py future-import-boilerplate
test/units/modules/system/test_known_hosts.py metaclass-boilerplate
test/units/modules/system/test_known_hosts.py pylint:ansible-bad-function
test/units/modules/system/test_linux_mountinfo.py future-import-boilerplate
test/units/modules/system/test_linux_mountinfo.py metaclass-boilerplate
test/units/modules/system/test_pamd.py metaclass-boilerplate
test/units/modules/system/test_parted.py future-import-boilerplate
test/units/modules/system/test_systemd.py future-import-boilerplate
test/units/modules/system/test_systemd.py metaclass-boilerplate
test/units/modules/system/test_ufw.py future-import-boilerplate
test/units/modules/system/test_ufw.py metaclass-boilerplate
test/units/modules/utils.py future-import-boilerplate
test/units/modules/utils.py metaclass-boilerplate
test/units/modules/web_infrastructure/test_apache2_module.py future-import-boilerplate
test/units/modules/web_infrastructure/test_apache2_module.py metaclass-boilerplate
test/units/modules/web_infrastructure/test_jenkins_plugin.py future-import-boilerplate
test/units/modules/web_infrastructure/test_jenkins_plugin.py metaclass-boilerplate
test/units/parsing/utils/test_addresses.py future-import-boilerplate
test/units/parsing/utils/test_addresses.py metaclass-boilerplate
test/units/parsing/vault/test_vault.py pylint:blacklisted-name
test/units/playbook/role/test_role.py pylint:blacklisted-name
test/units/playbook/test_attribute.py future-import-boilerplate
test/units/playbook/test_attribute.py metaclass-boilerplate
test/units/playbook/test_conditional.py future-import-boilerplate
test/units/playbook/test_conditional.py metaclass-boilerplate
test/units/plugins/action/test_synchronize.py future-import-boilerplate
test/units/plugins/action/test_synchronize.py metaclass-boilerplate
test/units/plugins/httpapi/test_ftd.py future-import-boilerplate
test/units/plugins/httpapi/test_ftd.py metaclass-boilerplate
test/units/plugins/inventory/test_constructed.py future-import-boilerplate
test/units/plugins/inventory/test_constructed.py metaclass-boilerplate
test/units/plugins/inventory/test_group.py future-import-boilerplate
test/units/plugins/inventory/test_group.py metaclass-boilerplate
test/units/plugins/inventory/test_host.py future-import-boilerplate
test/units/plugins/inventory/test_host.py metaclass-boilerplate
test/units/plugins/loader_fixtures/import_fixture.py future-import-boilerplate
test/units/plugins/shell/test_cmd.py future-import-boilerplate
test/units/plugins/shell/test_cmd.py metaclass-boilerplate
test/units/plugins/shell/test_powershell.py future-import-boilerplate
test/units/plugins/shell/test_powershell.py metaclass-boilerplate
test/units/plugins/test_plugins.py pylint:blacklisted-name
test/units/template/test_templar.py pylint:blacklisted-name
test/units/test_constants.py future-import-boilerplate
test/units/test_context.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/action/my_action.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/action/my_action.py metaclass-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_other_util.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_other_util.py metaclass-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_util.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_util.py metaclass-boilerplate
test/units/utils/kubevirt_fixtures.py future-import-boilerplate
test/units/utils/kubevirt_fixtures.py metaclass-boilerplate
test/units/utils/test_cleanup_tmp_file.py future-import-boilerplate
test/units/utils/test_encrypt.py future-import-boilerplate
test/units/utils/test_encrypt.py metaclass-boilerplate
test/units/utils/test_helpers.py future-import-boilerplate
test/units/utils/test_helpers.py metaclass-boilerplate
test/units/utils/test_shlex.py future-import-boilerplate
test/units/utils/test_shlex.py metaclass-boilerplate
test/utils/shippable/check_matrix.py replace-urlopen
test/utils/shippable/timing.py shebang
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,530 |
Module ios_l3_interfaces doesn't set ipv6 correctly when state replaced
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
New network resource modules were added in Ansible 2.9. While testing them I came across a bug: the **ios_l3_interfaces** module does not set ipv6 addresses correctly when the state is set to **replaced**. Instead of replacing the IP address, the module just adds it (the same behavior as when the state is set to merged).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_l3_interfaces
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Nothing that would affect module logic
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Host = Debian 9, python 2.7.13
Target = Cisco WS-C4500X-16 Version 03.11.00.E
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Before config:
```
interface Vlan100
ipv6 address 2001:DB8::1/32
ipv6 enable
```
Ansible playbook example:
```yaml
- name: Configure IPv6
hosts: cisco4500
gather_facts: yes
tasks:
- ios_l3_interfaces:
config:
- name: Vlan100
ipv6:
- address: 2001:DB8::2/32
state: replaced
```
##### EXPECTED RESULTS
I expect the module to change the IP address from 2001:DB8::1/32 to 2001:DB8::2/32.
It works this way for ipv4 (same module) and for ipv4/ipv6 in junos_l3_interfaces.
Expected after config:
```
interface Vlan100
ipv6 address 2001:DB8::2/32
ipv6 enable
```
##### ACTUAL RESULTS
The module merges the addresses instead of replacing them.
Actual after config:
```
interface Vlan100
ipv6 address 2001:DB8::1/32
ipv6 address 2001:DB8::2/32
ipv6 enable
```
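The difference between the observed merge behavior and the expected replace behavior can be sketched as follows. This is a simplified illustration only, not the actual ios_l3_interfaces code; the helper names `merged` and `replaced` are hypothetical:

```python
# Sketch of 'merged' vs 'replaced' semantics for a list-valued attribute.
def merged(have, want):
    # keep the existing addresses and add the new ones
    return sorted(set(have) | set(want))

def replaced(have, want):
    # the wanted addresses fully supersede whatever is configured
    return sorted(set(want))

have = ['2001:DB8::1/32']
want = ['2001:DB8::2/32']
print(merged(have, want))    # observed (buggy) result: both addresses remain
print(replaced(have, want))  # expected result: only the wanted address
```

With `state: replaced` the module should behave like `replaced` above, but the output shows it taking the `merged` path for ipv6.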
Ansible verbose output:
```
changed: [cisco4500] => {
"after": [
{
"name": "loopback0"
},
{
"name": "FastEthernet1"
},
{
"name": "TenGigabitEthernet1/1"
},
{
"name": "TenGigabitEthernet1/2"
},
{
"name": "TenGigabitEthernet1/3"
},
{
"name": "TenGigabitEthernet1/4"
},
{
"name": "TenGigabitEthernet1/5"
},
{
"name": "TenGigabitEthernet1/6"
},
{
"name": "TenGigabitEthernet1/7"
},
{
"name": "TenGigabitEthernet1/8"
},
{
"name": "TenGigabitEthernet1/9"
},
{
"name": "TenGigabitEthernet1/10"
},
{
"name": "TenGigabitEthernet1/11"
},
{
"name": "TenGigabitEthernet1/12"
},
{
"name": "TenGigabitEthernet1/13"
},
{
"name": "TenGigabitEthernet1/14"
},
{
"name": "TenGigabitEthernet1/15"
},
{
"name": "TenGigabitEthernet1/16"
},
{
"ipv6": [
{
"address": "2001:DB8::1/32"
},
{
"address": "2001:DB8::2/32"
}
],
"name": "Vlan100"
}
],
"before": [
{
"name": "loopback0"
},
{
"name": "FastEthernet1"
},
{
"name": "TenGigabitEthernet1/1"
},
{
"name": "TenGigabitEthernet1/2"
},
{
"name": "TenGigabitEthernet1/3"
},
{
"name": "TenGigabitEthernet1/4"
},
{
"name": "TenGigabitEthernet1/5"
},
{
"name": "TenGigabitEthernet1/6"
},
{
"name": "TenGigabitEthernet1/7"
},
{
"name": "TenGigabitEthernet1/8"
},
{
"name": "TenGigabitEthernet1/9"
},
{
"name": "TenGigabitEthernet1/10"
},
{
"name": "TenGigabitEthernet1/11"
},
{
"name": "TenGigabitEthernet1/12"
},
{
"name": "TenGigabitEthernet1/13"
},
{
"name": "TenGigabitEthernet1/14"
},
{
"name": "TenGigabitEthernet1/15"
},
{
"name": "TenGigabitEthernet1/16"
},
{
"ipv6": [
{
"address": "2001:DB8::1/32"
}
],
"name": "Vlan100"
}
],
"changed": true,
"commands": [
"interface Vlan100",
"ipv6 address 2001:DB8::2/32"
],
"invocation": {
"module_args": {
"config": [
{
"ipv6": [
{
"address": "2001:DB8::2/32"
}
],
"name": "Vlan100"
}
],
"state": "replaced"
}
}
}
```
|
https://github.com/ansible/ansible/issues/66530
|
https://github.com/ansible/ansible/pull/66654
|
ebf21bb48d565b1a860fa7be9b1149a18f52a7da
|
0c4f167b82e8c898dd8e6d5b00fcd76aa483d875
| 2020-01-16T13:49:43Z |
python
| 2020-01-22T08:05:51Z |
lib/ansible/module_utils/network/ios/utils/utils.py
|
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# utils
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.six import iteritems
from ansible.module_utils.network.common.utils import is_masklen, to_netmask
def remove_command_from_config_list(interface, cmd, commands):
# To delete the passed config
if interface not in commands:
commands.insert(0, interface)
commands.append('no %s' % cmd)
return commands
def add_command_to_config_list(interface, cmd, commands):
# To set the passed config
if interface not in commands:
commands.insert(0, interface)
commands.append(cmd)
def dict_to_set(sample_dict):
# Generate a set with passed dictionary for comparison
test_dict = dict()
if isinstance(sample_dict, dict):
for k, v in iteritems(sample_dict):
if v is not None:
if isinstance(v, list):
if isinstance(v[0], dict):
li = []
for each in v:
for key, value in iteritems(each):
if isinstance(value, list):
each[key] = tuple(value)
li.append(tuple(iteritems(each)))
v = tuple(li)
else:
v = tuple(v)
elif isinstance(v, dict):
li = []
for key, value in iteritems(v):
if isinstance(value, list):
v[key] = tuple(value)
li.extend(tuple(iteritems(v)))
v = tuple(li)
test_dict.update({k: v})
return_set = set(tuple(iteritems(test_dict)))
else:
return_set = set(sample_dict)
return return_set
def filter_dict_having_none_value(want, have):
    # Build a dict from the 'have' values for keys that are None in the 'want' dict
test_dict = dict()
test_key_dict = dict()
name = want.get('name')
if name:
test_dict['name'] = name
diff_ip = False
want_ip = ''
for k, v in iteritems(want):
if isinstance(v, dict):
for key, value in iteritems(v):
if value is None:
dict_val = have.get(k).get(key)
test_key_dict.update({key: dict_val})
test_dict.update({k: test_key_dict})
        if isinstance(v, list):
            for key, value in iteritems(v[0]):
                if value is None:
                    # 'have' stores list-valued attributes (e.g. ipv4/ipv6) as a
                    # list of dicts, so index into the list instead of calling
                    # .get() on the list object itself
                    have_v = have.get(k)
                    dict_val = have_v[0].get(key) if have_v else None
                    test_key_dict.update({key: dict_val})
                    test_dict.update({k: test_key_dict})
# below conditions checks are added to check if
# secondary IP is configured, if yes then delete
# the already configured IP if want and have IP
# is different else if it's same no need to delete
for each in v:
if each.get('secondary'):
want_ip = each.get('address').split('/')
have_ip = have.get('ipv4')
if len(want_ip) > 1 and have_ip and have_ip[0].get('secondary'):
have_ip = have_ip[0]['address'].split(' ')[0]
if have_ip != want_ip[0]:
diff_ip = True
if each.get('secondary') and diff_ip is True:
test_key_dict.update({'secondary': True})
test_dict.update({'ipv4': test_key_dict})
if v is None:
val = have.get(k)
test_dict.update({k: val})
return test_dict
def remove_duplicate_interface(commands):
# Remove duplicate interface from commands
set_cmd = []
for each in commands:
if 'interface' in each:
if each not in set_cmd:
set_cmd.append(each)
else:
set_cmd.append(each)
return set_cmd
def validate_ipv4(value, module):
if value:
address = value.split('/')
if len(address) != 2:
module.fail_json(msg='address format is <ipv4 address>/<mask>, got invalid format {0}'.format(value))
if not is_masklen(address[1]):
module.fail_json(msg='invalid value for mask: {0}, mask should be in range 0-32'.format(address[1]))
def validate_ipv6(value, module):
if value:
address = value.split('/')
if len(address) != 2:
module.fail_json(msg='address format is <ipv6 address>/<mask>, got invalid format {0}'.format(value))
else:
if not 0 <= int(address[1]) <= 128:
module.fail_json(msg='invalid value for mask: {0}, mask should be in range 0-128'.format(address[1]))
def validate_n_expand_ipv4(module, want):
# Check if input IPV4 is valid IP and expand IPV4 with its subnet mask
ip_addr_want = want.get('address')
if len(ip_addr_want.split(' ')) > 1:
return ip_addr_want
validate_ipv4(ip_addr_want, module)
ip = ip_addr_want.split('/')
if len(ip) == 2:
ip_addr_want = '{0} {1}'.format(ip[0], to_netmask(ip[1]))
return ip_addr_want
def normalize_interface(name):
"""Return the normalized interface name
"""
if not name:
return
def _get_number(name):
digits = ''
for char in name:
if char.isdigit() or char in '/.':
digits += char
return digits
if name.lower().startswith('gi'):
if_type = 'GigabitEthernet'
elif name.lower().startswith('te'):
if_type = 'TenGigabitEthernet'
elif name.lower().startswith('fa'):
if_type = 'FastEthernet'
elif name.lower().startswith('fo'):
if_type = 'FortyGigabitEthernet'
elif name.lower().startswith('long'):
if_type = 'LongReachEthernet'
elif name.lower().startswith('et'):
if_type = 'Ethernet'
elif name.lower().startswith('vl'):
if_type = 'Vlan'
elif name.lower().startswith('lo'):
if_type = 'loopback'
elif name.lower().startswith('po'):
if_type = 'Port-channel'
elif name.lower().startswith('nv'):
if_type = 'nve'
elif name.lower().startswith('twe'):
if_type = 'TwentyFiveGigE'
elif name.lower().startswith('hu'):
if_type = 'HundredGigE'
else:
if_type = None
number_list = name.split(' ')
if len(number_list) == 2:
number = number_list[-1].strip()
else:
number = _get_number(name)
if if_type:
proper_interface = if_type + number
else:
proper_interface = name
return proper_interface
def get_interface_type(interface):
"""Gets the type of interface
"""
if interface.upper().startswith('GI'):
return 'GigabitEthernet'
elif interface.upper().startswith('TE'):
return 'TenGigabitEthernet'
elif interface.upper().startswith('FA'):
return 'FastEthernet'
elif interface.upper().startswith('FO'):
return 'FortyGigabitEthernet'
elif interface.upper().startswith('LON'):
return 'LongReachEthernet'
elif interface.upper().startswith('ET'):
return 'Ethernet'
elif interface.upper().startswith('VL'):
return 'Vlan'
elif interface.upper().startswith('LO'):
return 'loopback'
elif interface.upper().startswith('PO'):
return 'Port-channel'
elif interface.upper().startswith('NV'):
return 'nve'
elif interface.upper().startswith('TWE'):
return 'TwentyFiveGigE'
elif interface.upper().startswith('HU'):
return 'HundredGigE'
else:
return 'unknown'
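As a rough illustration of how a `dict_to_set`-style normalization enables attribute-level diffing between the `want` and `have` configurations, here is a self-contained sketch. It is simplified and not the module's actual comparison code; the flattening logic here only handles one level of nesting:

```python
# Convert nested dicts/lists into hashable tuples so whole attribute values
# can be compared with set operations.
def dict_to_set(sample_dict):
    out = {}
    for k, v in sample_dict.items():
        if v is None:
            continue
        if isinstance(v, list):
            # a list of dicts becomes a tuple of sorted (key, value) tuples
            v = tuple(tuple(sorted(d.items())) for d in v)
        out[k] = v
    return set(out.items())

want = {'name': 'Vlan100', 'ipv6': [{'address': '2001:DB8::2/32'}]}
have = {'name': 'Vlan100', 'ipv6': [{'address': '2001:DB8::1/32'}]}
# attributes present/different in 'want' relative to 'have' need config commands
diff = dict_to_set(want) - dict_to_set(have)
print(sorted(k for k, _ in diff))  # ['ipv6']
```

Because the `ipv6` values differ while `name` matches, only `ipv6` shows up in the difference, which is what drives command generation for that attribute.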
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,530 |
Module ios_l3_interfaces doesn't set ipv6 correctly when state replaced
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
New network resource modules were added in Ansible 2.9. While testing them I came across a bug: the **ios_l3_interfaces** module does not set ipv6 addresses correctly when the state is set to **replaced**. Instead of replacing the IP address, the module just adds it (the same behavior as when the state is set to merged).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_l3_interfaces
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Nothing that would affect module logic
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Host = Debian 9, python 2.7.13
Target = Cisco WS-C4500X-16 Version 03.11.00.E
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Before config:
```
interface Vlan100
ipv6 address 2001:DB8::1/32
ipv6 enable
```
Ansible playbook example:
```yaml
- name: Configure IPv6
hosts: cisco4500
gather_facts: yes
tasks:
- ios_l3_interfaces:
config:
- name: Vlan100
ipv6:
- address: 2001:DB8::2/32
state: replaced
```
##### EXPECTED RESULTS
I expect the module to change the IP address from 2001:DB8::1/32 to 2001:DB8::2/32.
It works this way for ipv4 (same module) and for ipv4/ipv6 in junos_l3_interfaces.
Expected after config:
```
interface Vlan100
ipv6 address 2001:DB8::2/32
ipv6 enable
```
##### ACTUAL RESULTS
The module merges the addresses instead of replacing them.
Actual after config:
```
interface Vlan100
ipv6 address 2001:DB8::1/32
ipv6 address 2001:DB8::2/32
ipv6 enable
```
Ansible verbose output:
```
changed: [cisco4500] => {
"after": [
{
"name": "loopback0"
},
{
"name": "FastEthernet1"
},
{
"name": "TenGigabitEthernet1/1"
},
{
"name": "TenGigabitEthernet1/2"
},
{
"name": "TenGigabitEthernet1/3"
},
{
"name": "TenGigabitEthernet1/4"
},
{
"name": "TenGigabitEthernet1/5"
},
{
"name": "TenGigabitEthernet1/6"
},
{
"name": "TenGigabitEthernet1/7"
},
{
"name": "TenGigabitEthernet1/8"
},
{
"name": "TenGigabitEthernet1/9"
},
{
"name": "TenGigabitEthernet1/10"
},
{
"name": "TenGigabitEthernet1/11"
},
{
"name": "TenGigabitEthernet1/12"
},
{
"name": "TenGigabitEthernet1/13"
},
{
"name": "TenGigabitEthernet1/14"
},
{
"name": "TenGigabitEthernet1/15"
},
{
"name": "TenGigabitEthernet1/16"
},
{
"ipv6": [
{
"address": "2001:DB8::1/32"
},
{
"address": "2001:DB8::2/32"
}
],
"name": "Vlan100"
}
],
"before": [
{
"name": "loopback0"
},
{
"name": "FastEthernet1"
},
{
"name": "TenGigabitEthernet1/1"
},
{
"name": "TenGigabitEthernet1/2"
},
{
"name": "TenGigabitEthernet1/3"
},
{
"name": "TenGigabitEthernet1/4"
},
{
"name": "TenGigabitEthernet1/5"
},
{
"name": "TenGigabitEthernet1/6"
},
{
"name": "TenGigabitEthernet1/7"
},
{
"name": "TenGigabitEthernet1/8"
},
{
"name": "TenGigabitEthernet1/9"
},
{
"name": "TenGigabitEthernet1/10"
},
{
"name": "TenGigabitEthernet1/11"
},
{
"name": "TenGigabitEthernet1/12"
},
{
"name": "TenGigabitEthernet1/13"
},
{
"name": "TenGigabitEthernet1/14"
},
{
"name": "TenGigabitEthernet1/15"
},
{
"name": "TenGigabitEthernet1/16"
},
{
"ipv6": [
{
"address": "2001:DB8::1/32"
}
],
"name": "Vlan100"
}
],
"changed": true,
"commands": [
"interface Vlan100",
"ipv6 address 2001:DB8::2/32"
],
"invocation": {
"module_args": {
"config": [
{
"ipv6": [
{
"address": "2001:DB8::2/32"
}
],
"name": "Vlan100"
}
],
"state": "replaced"
}
}
}
```
|
https://github.com/ansible/ansible/issues/66530
|
https://github.com/ansible/ansible/pull/66654
|
ebf21bb48d565b1a860fa7be9b1149a18f52a7da
|
0c4f167b82e8c898dd8e6d5b00fcd76aa483d875
| 2020-01-16T13:49:43Z |
python
| 2020-01-22T08:05:51Z |
test/integration/targets/ios_l3_interfaces/tests/cli/replaced.yaml
|
---
- debug:
msg: "START Replaced ios_l3_interfaces state for integration tests on connection={{ ansible_connection }}"
- include_tasks: _remove_config.yaml
- include_tasks: _populate_config.yaml
- block:
- name: Replaces device configuration of listed interfaces with provided configuration
ios_l3_interfaces: &replaced
config:
- name: GigabitEthernet0/1
ipv4:
- address: 203.0.114.1/24
- name: GigabitEthernet0/2
ipv4:
- address: 198.51.100.1/24
secondary: True
- address: 198.51.100.2/24
state: replaced
register: result
- name: Assert that correct set of commands were generated
assert:
that:
- "{{ replaced['commands'] | symmetric_difference(result['commands']) | length == 0 }}"
- name: Assert that before dicts are correctly generated
assert:
that:
- "{{ replaced['before'] | symmetric_difference(result['before']) | length == 0 }}"
- name: Assert that after dict is correctly generated
assert:
that:
- "{{ replaced['after'] | symmetric_difference(result['after']) | length == 0 }}"
- name: Replaces device configuration of listed interfaces with provided configuration (IDEMPOTENT)
ios_l3_interfaces: *replaced
register: result
- name: Assert that task was idempotent
assert:
that:
- "result['changed'] == false"
always:
- include_tasks: _remove_config.yaml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,530 |
Module ios_l3_interfaces doesn't set ipv6 correctly when state replaced
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
New network resource modules were added in Ansible 2.9. While testing them I came across a bug: the **ios_l3_interfaces** module does not set ipv6 addresses correctly when the state is set to **replaced**. Instead of replacing the IP address, the module just adds it (the same behavior as when the state is set to merged).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_l3_interfaces
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Nothing that would affect module logic
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Host = Debian 9, python 2.7.13
Target = Cisco WS-C4500X-16 Version 03.11.00.E
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Before config:
```
interface Vlan100
ipv6 address 2001:DB8::1/32
ipv6 enable
```
Ansible playbook example:
```yaml
- name: Configure IPv6
hosts: cisco4500
gather_facts: yes
tasks:
- ios_l3_interfaces:
config:
- name: Vlan100
ipv6:
- address: 2001:DB8::2/32
state: replaced
```
##### EXPECTED RESULTS
I expect the module to change the IP address from 2001:DB8::1/32 to 2001:DB8::2/32.
It works this way for ipv4 (same module) and for ipv4/ipv6 in junos_l3_interfaces.
Expected after config:
```
interface Vlan100
ipv6 address 2001:DB8::2/32
ipv6 enable
```
##### ACTUAL RESULTS
The module merges the addresses instead of replacing them.
Actual after config:
```
interface Vlan100
ipv6 address 2001:DB8::1/32
ipv6 address 2001:DB8::2/32
ipv6 enable
```
Ansible verbose output:
```
changed: [cisco4500] => {
"after": [
{
"name": "loopback0"
},
{
"name": "FastEthernet1"
},
{
"name": "TenGigabitEthernet1/1"
},
{
"name": "TenGigabitEthernet1/2"
},
{
"name": "TenGigabitEthernet1/3"
},
{
"name": "TenGigabitEthernet1/4"
},
{
"name": "TenGigabitEthernet1/5"
},
{
"name": "TenGigabitEthernet1/6"
},
{
"name": "TenGigabitEthernet1/7"
},
{
"name": "TenGigabitEthernet1/8"
},
{
"name": "TenGigabitEthernet1/9"
},
{
"name": "TenGigabitEthernet1/10"
},
{
"name": "TenGigabitEthernet1/11"
},
{
"name": "TenGigabitEthernet1/12"
},
{
"name": "TenGigabitEthernet1/13"
},
{
"name": "TenGigabitEthernet1/14"
},
{
"name": "TenGigabitEthernet1/15"
},
{
"name": "TenGigabitEthernet1/16"
},
{
"ipv6": [
{
"address": "2001:DB8::1/32"
},
{
"address": "2001:DB8::2/32"
}
],
"name": "Vlan100"
}
],
"before": [
{
"name": "loopback0"
},
{
"name": "FastEthernet1"
},
{
"name": "TenGigabitEthernet1/1"
},
{
"name": "TenGigabitEthernet1/2"
},
{
"name": "TenGigabitEthernet1/3"
},
{
"name": "TenGigabitEthernet1/4"
},
{
"name": "TenGigabitEthernet1/5"
},
{
"name": "TenGigabitEthernet1/6"
},
{
"name": "TenGigabitEthernet1/7"
},
{
"name": "TenGigabitEthernet1/8"
},
{
"name": "TenGigabitEthernet1/9"
},
{
"name": "TenGigabitEthernet1/10"
},
{
"name": "TenGigabitEthernet1/11"
},
{
"name": "TenGigabitEthernet1/12"
},
{
"name": "TenGigabitEthernet1/13"
},
{
"name": "TenGigabitEthernet1/14"
},
{
"name": "TenGigabitEthernet1/15"
},
{
"name": "TenGigabitEthernet1/16"
},
{
"ipv6": [
{
"address": "2001:DB8::1/32"
}
],
"name": "Vlan100"
}
],
"changed": true,
"commands": [
"interface Vlan100",
"ipv6 address 2001:DB8::2/32"
],
"invocation": {
"module_args": {
"config": [
{
"ipv6": [
{
"address": "2001:DB8::2/32"
}
],
"name": "Vlan100"
}
],
"state": "replaced"
}
}
}
```
|
https://github.com/ansible/ansible/issues/66530
|
https://github.com/ansible/ansible/pull/66654
|
ebf21bb48d565b1a860fa7be9b1149a18f52a7da
|
0c4f167b82e8c898dd8e6d5b00fcd76aa483d875
| 2020-01-16T13:49:43Z |
python
| 2020-01-22T08:05:51Z |
test/integration/targets/ios_l3_interfaces/vars/main.yaml
|
---
merged:
before:
- name: loopback888
- name: loopback999
- ipv4:
- address: dhcp
name: GigabitEthernet0/0
- name: GigabitEthernet0/1
- name: GigabitEthernet0/2
commands:
- "interface GigabitEthernet0/1"
- "ip address dhcp client-id GigabitEthernet 0/0 hostname test.com"
- "interface GigabitEthernet0/2"
- "ip address 198.51.100.1 255.255.255.0 secondary"
- "ip address 198.51.100.2 255.255.255.0"
- "ipv6 address 2001:db8:0:3::/64"
after:
- name: loopback888
- name: loopback999
- ipv4:
- address: dhcp
name: GigabitEthernet0/0
- ipv4:
- address: dhcp
dhcp_client: 0
dhcp_hostname: test.com
name: GigabitEthernet0/1
- ipv4:
- address: 198.51.100.1 255.255.255.0
secondary: true
- address: 198.51.100.2 255.255.255.0
ipv6:
- address: 2001:db8:0:3::/64
name: GigabitEthernet0/2
replaced:
before:
- name: loopback888
- name: loopback999
- ipv4:
- address: dhcp
name: GigabitEthernet0/0
- ipv4:
- address: 203.0.113.27 255.255.255.0
name: GigabitEthernet0/1
- ipv4:
- address: 192.0.2.1 255.255.255.0
secondary: true
- address: 192.0.2.2 255.255.255.0
ipv6:
- address: 2001:db8:0:3::/64
name: GigabitEthernet0/2
commands:
- "interface GigabitEthernet0/1"
- "ip address 203.0.114.1 255.255.255.0"
- "interface GigabitEthernet0/2"
- "no ip address"
- "no ipv6 address"
- "ip address 198.51.100.1 255.255.255.0 secondary"
- "ip address 198.51.100.2 255.255.255.0"
after:
- name: loopback888
- name: loopback999
- ipv4:
- address: dhcp
name: GigabitEthernet0/0
- ipv4:
- address: 203.0.114.1 255.255.255.0
name: GigabitEthernet0/1
- ipv4:
- address: 198.51.100.1 255.255.255.0
secondary: true
- address: 198.51.100.2 255.255.255.0
name: GigabitEthernet0/2
overridden:
before:
- name: loopback888
- name: loopback999
- ipv4:
- address: dhcp
name: GigabitEthernet0/0
- ipv4:
- address: 203.0.113.27 255.255.255.0
name: GigabitEthernet0/1
- ipv4:
- address: 192.0.2.1 255.255.255.0
secondary: true
- address: 192.0.2.2 255.255.255.0
ipv6:
- address: 2001:db8:0:3::/64
name: GigabitEthernet0/2
commands:
- "interface GigabitEthernet0/1"
- "no ip address"
- "interface GigabitEthernet0/2"
- "no ip address"
- "no ipv6 address"
- "ip address 198.51.100.1 255.255.255.0"
- "ip address 198.51.100.2 255.255.255.0 secondary"
after:
- name: loopback888
- name: loopback999
- ipv4:
- address: dhcp
name: GigabitEthernet0/0
- name: GigabitEthernet0/1
- ipv4:
- address: 198.51.100.2 255.255.255.0
secondary: true
- address: 198.51.100.1 255.255.255.0
name: GigabitEthernet0/2
deleted:
before:
- name: loopback888
- name: loopback999
- ipv4:
- address: dhcp
name: GigabitEthernet0/0
- ipv4:
- address: 203.0.113.27 255.255.255.0
name: GigabitEthernet0/1
- ipv4:
- address: 192.0.2.1 255.255.255.0
secondary: true
- address: 192.0.2.2 255.255.255.0
ipv6:
- address: 2001:db8:0:3::/64
name: GigabitEthernet0/2
commands:
- "interface GigabitEthernet0/1"
- "no ip address"
- "interface GigabitEthernet0/2"
- "no ip address"
- "no ipv6 address"
after:
- name: loopback888
- name: loopback999
- ipv4:
- address: dhcp
name: GigabitEthernet0/0
- name: GigabitEthernet0/1
- name: GigabitEthernet0/2
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,365 |
Traceback with "debug var=vars" when jinja2_native is true
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Traceback with `debug var=vars` when `ANSIBLE_JINJA2_NATIVE` is `true`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/vars/manager.py
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = None
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/venv-main/lib/python3.7/site-packages/ansible
executable location = /home/vagrant/venv-main/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
No config changes.
```
$ ansible-config dump --only-changed
$
```
##### OS / ENVIRONMENT
Ubuntu 18.04.2 LTS
- running playbook on localhost
##### STEPS TO REPRODUCE
Command:
```
ANSIBLE_JINJA2_NATIVE=true ansible-playbook -i localhost, -c local bug-playbook.yml -vvv
```
Playbook `bug-playbook.yml`:
```yaml
- hosts: all
  gather_facts: no
  tasks:
    - name: show vars
      debug: var=vars
    - name: show hostvars
      debug: var=hostvars
```
##### EXPECTED RESULTS
`debug` task shows vars output normally like this:
```
TASK [show vars] ***************************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:5
ok: [localhost] => {
    "vars": {
        "ansible_check_mode": false,
        ...
```
##### ACTUAL RESULTS
Traceback from `debug var` on `vars` and `hostvars`:
```
$ ANSIBLE_JINJA2_NATIVE=true ansible-playbook -i localhost, -c local bug-playbook.yml -vvvv
ansible-playbook 2.9.1
config file = None
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/venv-main/lib/python3.7/site-packages/ansible
executable location = /home/vagrant/venv-main/bin/ansible-playbook
python version = 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
No config file found; using defaults
setting up inventory plugins
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
Loading callback plugin default of type stdout, v2.0 from /home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: bug-playbook.yml *****************************************************************************************************************************************************************************************************************
Positional arguments: bug-playbook.yml
verbosity: 4
connection: local
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('localhost,',)
forks: 5
1 plays in bug-playbook.yml
PLAY [all] *********************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [show vars] ***************************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:5
[WARNING]: Failure using method (v2_runner_on_ok) in callback plugin (<ansible.plugins.callback.default.CallbackModule object at 0x7f88cf423cf8>): 'VariableManager' object has no attribute '_loader'
Callback Exception:
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py", line 156, in v2_runner_on_ok
msg += " => %s" % (self._dump_results(result._result),)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/__init__.py", line 126, in _dump_results
jsonified_results = json.dumps(abridged_result, cls=AnsibleJSONEncoder, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
File "/usr/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/module_utils/common/json.py", line 53, in default
value = dict(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 83, in __getitem__
data = self.raw_get(host_name)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 80, in raw_get
return self._variable_manager.get_vars(host=host, include_hostvars=False)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 178, in get_vars
_hosts_all=_hosts_all,
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 443, in _get_magic_variables
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
TASK [show hostvars] ***********************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:8
Callback Exception:
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py", line 156, in v2_runner_on_ok
msg += " => %s" % (self._dump_results(result._result),)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/__init__.py", line 126, in _dump_results
jsonified_results = json.dumps(abridged_result, cls=AnsibleJSONEncoder, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
File "/usr/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/module_utils/common/json.py", line 53, in default
value = dict(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 83, in __getitem__
data = self.raw_get(host_name)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 80, in raw_get
return self._variable_manager.get_vars(host=host, include_hostvars=False)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 178, in get_vars
_hosts_all=_hosts_all,
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 443, in _get_magic_variables
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Running without the jinja2_native=true option does not exhibit this problem (normal output omitted).
```
$ ANSIBLE_JINJA2_NATIVE=false ansible-playbook -i localhost, -c local bug-playbook.yml -vvv
```
Workarounds:
- best option: set jinja2_native to false
- if you don't need jinja2_native for some playbooks, downgrade jinja2 to 2.9.6
Have tested this with 2.10.0 to 2.10.3, which all show this issue.
Related
---
#64745 looks similar but isn't the same, as the repro is different, and PR from @mkrizek with fix is included in 2.9.1.
- includes [this comment](https://github.com/ansible/ansible/issues/64745#issuecomment-553169757) with some related issues.
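A side note on the mechanics of the callback exception above: the `debug` task itself succeeds, but the default callback fails while JSON-serializing the lazy `hostvars` mapping, because the encoder coerces the mapping to a dict and that coercion runs the per-host variable lookup. A minimal self-contained sketch of the same interaction (all class names here are illustrative, not Ansible's real API):

```python
import json
from collections.abc import Mapping


class BrokenVM:
    """Simulates the reported bug: the manager lost its loader, so any
    variable lookup raises AttributeError (as in the traceback above)."""
    def get_vars(self, host):
        return self._loader  # no such attribute -> AttributeError


class LazyHostVars(Mapping):
    """Hypothetical stand-in for HostVars: values are computed on access,
    so serializing the mapping executes the lookup code."""
    def __init__(self, hosts, variable_manager):
        self._hosts = list(hosts)
        self._vm = variable_manager

    def __getitem__(self, host):
        return self._vm.get_vars(host)

    def __iter__(self):
        return iter(self._hosts)

    def __len__(self):
        return len(self._hosts)


class Encoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, Mapping):
            return dict(o)  # calls __getitem__ for every host during dumps
        return super().default(o)
```

Calling `json.dumps({"hostvars": LazyHostVars(["localhost"], BrokenVM())}, cls=Encoder)` raises the lookup's AttributeError from inside the encoder, which is why the error surfaces as a callback warning rather than a task failure.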
|
https://github.com/ansible/ansible/issues/65365
|
https://github.com/ansible/ansible/pull/65508
|
0c4f167b82e8c898dd8e6d5b00fcd76aa483d875
|
ec371eb2277891bd9c5b463059730f3012c8ad06
| 2019-11-29T13:59:00Z |
python
| 2020-01-22T10:57:09Z |
changelogs/fragments/65365-fix-tb-printing-hostvars.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,365 |
Traceback with "debug var=vars" when jinja2_native is true
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Traceback with `debug var=vars` when `ANSIBLE_JINJA2_NATIVE` is `true`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/vars/manager.py
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = None
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/venv-main/lib/python3.7/site-packages/ansible
executable location = /home/vagrant/venv-main/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
No config changes.
```
$ ansible-config dump --only-changed
$
```
##### OS / ENVIRONMENT
Ubuntu 18.04.2 LTS
- running playbook on localhost
##### STEPS TO REPRODUCE
Command:
```
ANSIBLE_JINJA2_NATIVE=true ansible-playbook -i localhost, -c local bug-playbook.yml -vvv
```
Playbook `bug-playbook.yml`:
```yaml
- hosts: all
  gather_facts: no
  tasks:
    - name: show vars
      debug: var=vars
    - name: show hostvars
      debug: var=hostvars
```
##### EXPECTED RESULTS
`debug` task shows vars output normally like this:
```
TASK [show vars] ***************************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:5
ok: [localhost] => {
    "vars": {
        "ansible_check_mode": false,
        ...
```
##### ACTUAL RESULTS
Traceback from `debug var` on `vars` and `hostvars`:
```
$ ANSIBLE_JINJA2_NATIVE=true ansible-playbook -i localhost, -c local bug-playbook.yml -vvvv
ansible-playbook 2.9.1
config file = None
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/venv-main/lib/python3.7/site-packages/ansible
executable location = /home/vagrant/venv-main/bin/ansible-playbook
python version = 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
No config file found; using defaults
setting up inventory plugins
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
Loading callback plugin default of type stdout, v2.0 from /home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: bug-playbook.yml *****************************************************************************************************************************************************************************************************************
Positional arguments: bug-playbook.yml
verbosity: 4
connection: local
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('localhost,',)
forks: 5
1 plays in bug-playbook.yml
PLAY [all] *********************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [show vars] ***************************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:5
[WARNING]: Failure using method (v2_runner_on_ok) in callback plugin (<ansible.plugins.callback.default.CallbackModule object at 0x7f88cf423cf8>): 'VariableManager' object has no attribute '_loader'
Callback Exception:
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py", line 156, in v2_runner_on_ok
msg += " => %s" % (self._dump_results(result._result),)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/__init__.py", line 126, in _dump_results
jsonified_results = json.dumps(abridged_result, cls=AnsibleJSONEncoder, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
File "/usr/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/module_utils/common/json.py", line 53, in default
value = dict(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 83, in __getitem__
data = self.raw_get(host_name)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 80, in raw_get
return self._variable_manager.get_vars(host=host, include_hostvars=False)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 178, in get_vars
_hosts_all=_hosts_all,
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 443, in _get_magic_variables
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
TASK [show hostvars] ***********************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:8
Callback Exception:
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py", line 156, in v2_runner_on_ok
msg += " => %s" % (self._dump_results(result._result),)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/__init__.py", line 126, in _dump_results
jsonified_results = json.dumps(abridged_result, cls=AnsibleJSONEncoder, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
File "/usr/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/module_utils/common/json.py", line 53, in default
value = dict(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 83, in __getitem__
data = self.raw_get(host_name)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 80, in raw_get
return self._variable_manager.get_vars(host=host, include_hostvars=False)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 178, in get_vars
_hosts_all=_hosts_all,
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 443, in _get_magic_variables
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Running without the jinja2_native=true option does not exhibit this problem (normal output omitted).
```
$ ANSIBLE_JINJA2_NATIVE=false ansible-playbook -i localhost, -c local bug-playbook.yml -vvv
```
Workarounds:
- best option: set jinja2_native to false
- if you don't need jinja2_native for some playbooks, downgrade jinja2 to 2.9.6
Have tested this with 2.10.0 to 2.10.3, which all show this issue.
Related
---
#64745 looks similar but isn't the same, as the repro is different, and PR from @mkrizek with fix is included in 2.9.1.
- includes [this comment](https://github.com/ansible/ansible/issues/64745#issuecomment-553169757) with some related issues.
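For context on why the `ANSIBLE_JINJA2_NATIVE` toggle changes behavior at all: Jinja2's native mode (available in jinja2 >= 2.10) makes templates evaluate to native Python objects instead of strings, which changes the types flowing through the variable system. A quick illustration using Jinja2's own API:

```python
from jinja2 import Environment
from jinja2.nativetypes import NativeEnvironment

# Classic environment: every render result is a string.
text_result = Environment().from_string("{{ [1, 2] }}").render()

# Native environment: the template evaluates to a real Python list.
native_result = NativeEnvironment().from_string("{{ [1, 2] }}").render()
```

Here `text_result` is the string `"[1, 2]"`, while `native_result` is the list `[1, 2]` — the kind of type difference that exposes latent assumptions in code paths like the one reported in this issue.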
|
https://github.com/ansible/ansible/issues/65365
|
https://github.com/ansible/ansible/pull/65508
|
0c4f167b82e8c898dd8e6d5b00fcd76aa483d875
|
ec371eb2277891bd9c5b463059730f3012c8ad06
| 2019-11-29T13:59:00Z |
python
| 2020-01-22T10:57:09Z |
lib/ansible/vars/hostvars.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.module_utils.common._collections_compat import Mapping
from ansible.template import Templar, AnsibleUndefined
STATIC_VARS = [
    'ansible_version',
    'ansible_play_hosts',
    'ansible_dependent_role_names',
    'ansible_play_role_names',
    'ansible_role_names',
    'inventory_hostname',
    'inventory_hostname_short',
    'inventory_file',
    'inventory_dir',
    'groups',
    'group_names',
    'omit',
    'playbook_dir',
    'play_hosts',
    'role_names',
    'ungrouped',
]

__all__ = ['HostVars', 'HostVarsVars']


# Note -- this is a Mapping, not a MutableMapping
class HostVars(Mapping):
    ''' A special view of vars_cache that adds values from the inventory when needed. '''

    def __init__(self, inventory, variable_manager, loader):
        self._lookup = dict()
        self._inventory = inventory
        self._loader = loader
        self._variable_manager = variable_manager
        variable_manager._hostvars = self

    def set_variable_manager(self, variable_manager):
        self._variable_manager = variable_manager
        variable_manager._hostvars = self

    def set_inventory(self, inventory):
        self._inventory = inventory

    def _find_host(self, host_name):
        # does not use inventory.hosts so it can create localhost on demand
        return self._inventory.get_host(host_name)

    def raw_get(self, host_name):
        '''
        Similar to __getitem__, however the returned data is not run through
        the templating engine to expand variables in the hostvars.
        '''
        host = self._find_host(host_name)
        if host is None:
            return AnsibleUndefined(name="hostvars['%s']" % host_name)

        return self._variable_manager.get_vars(host=host, include_hostvars=False)

    def __getitem__(self, host_name):
        data = self.raw_get(host_name)
        if isinstance(data, AnsibleUndefined):
            return data
        return HostVarsVars(data, loader=self._loader)

    def set_host_variable(self, host, varname, value):
        self._variable_manager.set_host_variable(host, varname, value)

    def set_nonpersistent_facts(self, host, facts):
        self._variable_manager.set_nonpersistent_facts(host, facts)

    def set_host_facts(self, host, facts):
        self._variable_manager.set_host_facts(host, facts)

    def __contains__(self, host_name):
        # does not use inventory.hosts so it can create localhost on demand
        return self._find_host(host_name) is not None

    def __iter__(self):
        for host in self._inventory.hosts:
            yield host

    def __len__(self):
        return len(self._inventory.hosts)

    def __repr__(self):
        out = {}
        for host in self._inventory.hosts:
            out[host] = self.get(host)
        return repr(out)

    def __deepcopy__(self, memo):
        # We do not need to deepcopy because HostVars is immutable,
        # however we have to implement the method so we can deepcopy
        # variables' dicts that contain HostVars.
        return self


class HostVarsVars(Mapping):

    def __init__(self, variables, loader):
        self._vars = variables
        self._loader = loader

    def __getitem__(self, var):
        templar = Templar(variables=self._vars, loader=self._loader)
        return templar.template(self._vars[var], fail_on_undefined=False, static_vars=STATIC_VARS)

    def __contains__(self, var):
        return (var in self._vars)

    def __iter__(self):
        for var in self._vars.keys():
            yield var

    def __len__(self):
        return len(self._vars.keys())

    def __repr__(self):
        templar = Templar(variables=self._vars, loader=self._loader)
        return repr(templar.template(self._vars, fail_on_undefined=False, static_vars=STATIC_VARS))
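The `__deepcopy__` shortcut at the end of `HostVars` is worth calling out: because the view is read-only, returning `self` lets deep copies of variable dicts that embed it stay cheap. A standalone sketch of the pattern (illustrative names, not Ansible code):

```python
import copy
from collections.abc import Mapping


class ReadOnlyView(Mapping):
    """Illustrative stand-in for the HostVars pattern above: an immutable
    Mapping that returns itself from __deepcopy__, so containers holding
    it can be deep-copied without re-materializing the view."""
    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def __deepcopy__(self, memo):
        # Safe only because instances are never mutated after construction.
        return self
```

`copy.deepcopy` honors the instance's `__deepcopy__`, so a dict containing a `ReadOnlyView` deep-copies everything else while the view itself is shared by identity.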
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,365 |
Traceback with "debug var=vars" when jinja2_native is true
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Traceback with `debug var=vars` when `ANSIBLE_JINJA2_NATIVE` is `true`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/vars/manager.py
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = None
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/venv-main/lib/python3.7/site-packages/ansible
executable location = /home/vagrant/venv-main/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
No config changes.
```
$ ansible-config dump --only-changed
$
```
##### OS / ENVIRONMENT
Ubuntu 18.04.2 LTS
- running playbook on localhost
##### STEPS TO REPRODUCE
Command:
```
ANSIBLE_JINJA2_NATIVE=true ansible-playbook -i localhost, -c local bug-playbook.yml -vvv
```
Playbook `bug-playbook.yml`:
```yaml
- hosts: all
  gather_facts: no
  tasks:
    - name: show vars
      debug: var=vars
    - name: show hostvars
      debug: var=hostvars
```
##### EXPECTED RESULTS
`debug` task shows vars output normally like this:
```
TASK [show vars] ***************************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:5
ok: [localhost] => {
    "vars": {
        "ansible_check_mode": false,
        ...
```
##### ACTUAL RESULTS
Traceback from `debug var` on `vars` and `hostvars`:
```
$ ANSIBLE_JINJA2_NATIVE=true ansible-playbook -i localhost, -c local bug-playbook.yml -vvvv
ansible-playbook 2.9.1
config file = None
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/venv-main/lib/python3.7/site-packages/ansible
executable location = /home/vagrant/venv-main/bin/ansible-playbook
python version = 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
No config file found; using defaults
setting up inventory plugins
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
Loading callback plugin default of type stdout, v2.0 from /home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: bug-playbook.yml *****************************************************************************************************************************************************************************************************************
Positional arguments: bug-playbook.yml
verbosity: 4
connection: local
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('localhost,',)
forks: 5
1 plays in bug-playbook.yml
PLAY [all] *********************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [show vars] ***************************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:5
[WARNING]: Failure using method (v2_runner_on_ok) in callback plugin (<ansible.plugins.callback.default.CallbackModule object at 0x7f88cf423cf8>): 'VariableManager' object has no attribute '_loader'
Callback Exception:
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py", line 156, in v2_runner_on_ok
msg += " => %s" % (self._dump_results(result._result),)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/__init__.py", line 126, in _dump_results
jsonified_results = json.dumps(abridged_result, cls=AnsibleJSONEncoder, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
File "/usr/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/module_utils/common/json.py", line 53, in default
value = dict(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 83, in __getitem__
data = self.raw_get(host_name)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 80, in raw_get
return self._variable_manager.get_vars(host=host, include_hostvars=False)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 178, in get_vars
_hosts_all=_hosts_all,
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 443, in _get_magic_variables
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
TASK [show hostvars] ***********************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:8
Callback Exception:
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py", line 156, in v2_runner_on_ok
msg += " => %s" % (self._dump_results(result._result),)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/__init__.py", line 126, in _dump_results
jsonified_results = json.dumps(abridged_result, cls=AnsibleJSONEncoder, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
File "/usr/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/module_utils/common/json.py", line 53, in default
value = dict(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 83, in __getitem__
data = self.raw_get(host_name)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 80, in raw_get
return self._variable_manager.get_vars(host=host, include_hostvars=False)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 178, in get_vars
_hosts_all=_hosts_all,
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 443, in _get_magic_variables
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Running without the jinja2_native=true option does not exhibit this problem (normal output omitted).
```
$ ANSIBLE_JINJA2_NATIVE=false ansible-playbook -i localhost, -c local bug-playbook.yml -vvv
```
Workarounds:
- best option: set jinja2_native to false
- if you don't need jinja2_native for some playbooks, downgrade jinja2 to 2.9.6
Have tested this with 2.10.0 to 2.10.3, which all show this issue.
Related
---
#64745 looks similar but isn't the same, as the repro is different, and PR from @mkrizek with fix is included in 2.9.1.
- includes [this comment](https://github.com/ansible/ansible/issues/64745#issuecomment-553169757) with some related issues.
|
https://github.com/ansible/ansible/issues/65365
|
https://github.com/ansible/ansible/pull/65508
|
0c4f167b82e8c898dd8e6d5b00fcd76aa483d875
|
ec371eb2277891bd9c5b463059730f3012c8ad06
| 2019-11-29T13:59:00Z |
python
| 2020-01-22T10:57:09Z |
lib/ansible/vars/manager.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
from collections import defaultdict
try:
from hashlib import sha1
except ImportError:
from sha import sha as sha1
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleFileNotFound, AnsibleAssertionError, AnsibleTemplateError
from ansible.inventory.host import Host
from ansible.inventory.helpers import sort_groups, get_group_vars
from ansible.module_utils._text import to_text
from ansible.module_utils.common._collections_compat import Mapping, MutableMapping, Sequence
from ansible.module_utils.six import iteritems, text_type, string_types
from ansible.plugins.loader import lookup_loader
from ansible.vars.fact_cache import FactCache
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.vars import combine_vars, load_extra_vars, load_options_vars
from ansible.utils.unsafe_proxy import wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.vars.plugins import get_vars_from_inventory_sources, get_vars_from_path
display = Display()
def preprocess_vars(a):
'''
Ensures that vars contained in the parameter passed in are
returned as a list of dictionaries, to ensure for instance
that vars loaded from a file conform to an expected state.
'''
if a is None:
return None
elif not isinstance(a, list):
data = [a]
else:
data = a
for item in data:
if not isinstance(item, MutableMapping):
raise AnsibleError("variable files must contain either a dictionary of variables, or a list of dictionaries. Got: %s (%s)" % (a, type(a)))
return data
class VariableManager:
_ALLOWED = frozenset(['plugins_by_group', 'groups_plugins_play', 'groups_plugins_inventory', 'groups_inventory',
'all_plugins_play', 'all_plugins_inventory', 'all_inventory'])
def __init__(self, loader=None, inventory=None, version_info=None):
self._nonpersistent_fact_cache = defaultdict(dict)
self._vars_cache = defaultdict(dict)
self._extra_vars = defaultdict(dict)
self._host_vars_files = defaultdict(dict)
self._group_vars_files = defaultdict(dict)
self._inventory = inventory
self._loader = loader
self._hostvars = None
self._omit_token = '__omit_place_holder__%s' % sha1(os.urandom(64)).hexdigest()
self._options_vars = load_options_vars(version_info)
# If the basedir is specified as the empty string then it results in cwd being used.
# This is not a safe location to load vars from.
basedir = self._options_vars.get('basedir', False)
self.safe_basedir = bool(basedir is False or basedir)
# load extra vars
self._extra_vars = load_extra_vars(loader=self._loader)
# load fact cache
try:
self._fact_cache = FactCache()
except AnsibleError as e:
# bad cache plugin is not fatal error
# fallback to a dict as in memory cache
display.warning(to_text(e))
self._fact_cache = {}
def __getstate__(self):
data = dict(
fact_cache=self._fact_cache,
np_fact_cache=self._nonpersistent_fact_cache,
vars_cache=self._vars_cache,
extra_vars=self._extra_vars,
host_vars_files=self._host_vars_files,
group_vars_files=self._group_vars_files,
omit_token=self._omit_token,
options_vars=self._options_vars,
inventory=self._inventory,
safe_basedir=self.safe_basedir,
)
return data
def __setstate__(self, data):
self._fact_cache = data.get('fact_cache', defaultdict(dict))
self._nonpersistent_fact_cache = data.get('np_fact_cache', defaultdict(dict))
self._vars_cache = data.get('vars_cache', defaultdict(dict))
self._extra_vars = data.get('extra_vars', dict())
self._host_vars_files = data.get('host_vars_files', defaultdict(dict))
self._group_vars_files = data.get('group_vars_files', defaultdict(dict))
self._omit_token = data.get('omit_token', '__omit_place_holder__%s' % sha1(os.urandom(64)).hexdigest())
self._inventory = data.get('inventory', None)
self._options_vars = data.get('options_vars', dict())
self.safe_basedir = data.get('safe_basedir', False)
@property
def extra_vars(self):
return self._extra_vars
def set_inventory(self, inventory):
self._inventory = inventory
def get_vars(self, play=None, host=None, task=None, include_hostvars=True, include_delegate_to=True, use_cache=True,
_hosts=None, _hosts_all=None, stage='task'):
'''
Returns the variables, with optional "context" given via the parameters
for the play, host, and task (which could possibly result in different
sets of variables being returned due to the additional context).
The order of precedence is:
- play->roles->get_default_vars (if there is a play context)
- group_vars_files[host] (if there is a host context)
- host_vars_files[host] (if there is a host context)
- host->get_vars (if there is a host context)
- fact_cache[host] (if there is a host context)
- play vars (if there is a play context)
- play vars_files (if there's no host context, ignore
file names that cannot be templated)
- task->get_vars (if there is a task context)
- vars_cache[host] (if there is a host context)
- extra vars
``_hosts`` and ``_hosts_all`` should be considered private args, with only internal trusted callers relying
on the functionality they provide. These arguments may be removed at a later date without a deprecation
period and without warning.
'''
display.debug("in VariableManager get_vars()")
all_vars = dict()
magic_variables = self._get_magic_variables(
play=play,
host=host,
task=task,
include_hostvars=include_hostvars,
include_delegate_to=include_delegate_to,
_hosts=_hosts,
_hosts_all=_hosts_all,
)
_vars_sources = {}
def _combine_and_track(data, new_data, source):
'''
Wrapper function to update var sources dict and call combine_vars()
See notes in the VarsWithSources docstring for caveats and limitations of the source tracking
'''
if C.DEFAULT_DEBUG:
# Populate var sources dict
for key in new_data:
_vars_sources[key] = source
return combine_vars(data, new_data)
# default for all cases
basedirs = []
if self.safe_basedir: # avoid adhoc/console loading cwd
basedirs = [self._loader.get_basedir()]
if play:
# first we compile any vars specified in defaults/main.yml
# for all roles within the specified play
for role in play.get_roles():
all_vars = _combine_and_track(all_vars, role.get_default_vars(), "role '%s' defaults" % role.name)
if task:
# set basedirs
if C.PLAYBOOK_VARS_ROOT == 'all': # should be default
basedirs = task.get_search_path()
elif C.PLAYBOOK_VARS_ROOT in ('bottom', 'playbook_dir'): # only option in 2.4.0
basedirs = [task.get_search_path()[0]]
elif C.PLAYBOOK_VARS_ROOT != 'top':
# preserves default basedirs, only option pre 2.3
raise AnsibleError('Unknown playbook vars logic: %s' % C.PLAYBOOK_VARS_ROOT)
# if we have a task in this context, and that task has a role, make
# sure it sees its defaults above any other roles, as we previously
# (v1) made sure each task had a copy of its roles default vars
if task._role is not None and (play or task.action == 'include_role'):
all_vars = _combine_and_track(all_vars, task._role.get_default_vars(dep_chain=task.get_dep_chain()),
"role '%s' defaults" % task._role.name)
if host:
# THE 'all' group and the rest of groups for a host, used below
all_group = self._inventory.groups.get('all')
host_groups = sort_groups([g for g in host.get_groups() if g.name not in ['all']])
def _get_plugin_vars(plugin, path, entities):
data = {}
try:
data = plugin.get_vars(self._loader, path, entities)
except AttributeError:
try:
for entity in entities:
if isinstance(entity, Host):
data.update(plugin.get_host_vars(entity.name))
else:
data.update(plugin.get_group_vars(entity.name))
except AttributeError:
if hasattr(plugin, 'run'):
raise AnsibleError("Cannot use v1 type vars plugin %s from %s" % (plugin._load_name, plugin._original_path))
else:
raise AnsibleError("Invalid vars plugin %s from %s" % (plugin._load_name, plugin._original_path))
return data
# internal functions that actually do the work
def _plugins_inventory(entities):
''' merges all entities by inventory source '''
return get_vars_from_inventory_sources(self._loader, self._inventory._sources, entities, stage)
def _plugins_play(entities):
''' merges all entities adjacent to play '''
data = {}
for path in basedirs:
data = _combine_and_track(data, get_vars_from_path(self._loader, path, entities, stage), "path '%s'" % path)
return data
# configurable functions that are sortable via config, remember to add to _ALLOWED if expanding this list
def all_inventory():
return all_group.get_vars()
def all_plugins_inventory():
return _plugins_inventory([all_group])
def all_plugins_play():
return _plugins_play([all_group])
def groups_inventory():
''' gets group vars from inventory '''
return get_group_vars(host_groups)
def groups_plugins_inventory():
''' gets plugin sources from inventory for groups '''
return _plugins_inventory(host_groups)
def groups_plugins_play():
''' gets plugin sources from play for groups '''
return _plugins_play(host_groups)
def plugins_by_groups():
'''
merges all plugin sources by group,
This should be used instead, NOT in combination with the other groups_plugins* functions
'''
data = {}
for group in host_groups:
data[group] = _combine_and_track(data[group], _plugins_inventory(group), "inventory group_vars for '%s'" % group)
data[group] = _combine_and_track(data[group], _plugins_play(group), "playbook group_vars for '%s'" % group)
return data
# Merge groups as per precedence config
# only allow to call the functions we want exposed
for entry in C.VARIABLE_PRECEDENCE:
if entry in self._ALLOWED:
display.debug('Calling %s to load vars for %s' % (entry, host.name))
all_vars = _combine_and_track(all_vars, locals()[entry](), "group vars, precedence entry '%s'" % entry)
else:
display.warning('Ignoring unknown variable precedence entry: %s' % (entry))
# host vars, from inventory, inventory adjacent and play adjacent via plugins
all_vars = _combine_and_track(all_vars, host.get_vars(), "host vars for '%s'" % host)
all_vars = _combine_and_track(all_vars, _plugins_inventory([host]), "inventory host_vars for '%s'" % host)
all_vars = _combine_and_track(all_vars, _plugins_play([host]), "playbook host_vars for '%s'" % host)
# finally, the facts caches for this host, if it exists
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
try:
facts = wrap_var(self._fact_cache.get(host.name, {}))
all_vars.update(namespace_facts(facts))
# push facts to main namespace
if C.INJECT_FACTS_AS_VARS:
all_vars = _combine_and_track(all_vars, wrap_var(clean_facts(facts)), "facts")
else:
# always 'promote' ansible_local
all_vars = _combine_and_track(all_vars, wrap_var({'ansible_local': facts.get('ansible_local', {})}), "facts")
except KeyError:
pass
if play:
all_vars = _combine_and_track(all_vars, play.get_vars(), "play vars")
vars_files = play.get_vars_files()
try:
for vars_file_item in vars_files:
# create a set of temporary vars here, which incorporate the extra
# and magic vars so we can properly template the vars_files entries
temp_vars = combine_vars(all_vars, self._extra_vars)
temp_vars = combine_vars(temp_vars, magic_variables)
templar = Templar(loader=self._loader, variables=temp_vars)
# we assume each item in the list is itself a list, as we
# support "conditional includes" for vars_files, which mimics
# the with_first_found mechanism.
vars_file_list = vars_file_item
if not isinstance(vars_file_list, list):
vars_file_list = [vars_file_list]
# now we iterate through the (potential) files, and break out
# as soon as we read one from the list. If none are found, we
# raise an error, which is silently ignored at this point.
try:
for vars_file in vars_file_list:
vars_file = templar.template(vars_file)
if not (isinstance(vars_file, Sequence)):
raise AnsibleError(
"Invalid vars_files entry found: %r\n"
"vars_files entries should be either a string type or "
"a list of string types after template expansion" % vars_file
)
try:
data = preprocess_vars(self._loader.load_from_file(vars_file, unsafe=True))
if data is not None:
for item in data:
all_vars = _combine_and_track(all_vars, item, "play vars_files from '%s'" % vars_file)
break
except AnsibleFileNotFound:
# we continue on loader failures
continue
except AnsibleParserError:
raise
else:
# if include_delegate_to is set to False, we ignore the missing
# vars file here because we're working on a delegated host
if include_delegate_to:
raise AnsibleFileNotFound("vars file %s was not found" % vars_file_item)
except (UndefinedError, AnsibleUndefinedVariable):
if host is not None and self._fact_cache.get(host.name, dict()).get('module_setup') and task is not None:
raise AnsibleUndefinedVariable("an undefined variable was found when attempting to template the vars_files item '%s'"
% vars_file_item, obj=vars_file_item)
else:
# we do not have a full context here, and the missing variable could be because of that
# so just show a warning and continue
display.vvv("skipping vars_file '%s' due to an undefined variable" % vars_file_item)
continue
display.vvv("Read vars_file '%s'" % vars_file_item)
except TypeError:
raise AnsibleParserError("Error while reading vars files - please supply a list of file names. "
"Got '%s' of type %s" % (vars_files, type(vars_files)))
# By default, we now merge in all vars from all roles in the play,
# unless the user has disabled this via a config option
if not C.DEFAULT_PRIVATE_ROLE_VARS:
for role in play.get_roles():
all_vars = _combine_and_track(all_vars, role.get_vars(include_params=False), "role '%s' vars" % role.name)
# next, we merge in the vars from the role, which will specifically
# follow the role dependency chain, and then we merge in the tasks
# vars (which will look at parent blocks/task includes)
if task:
if task._role:
all_vars = _combine_and_track(all_vars, task._role.get_vars(task.get_dep_chain(), include_params=False),
"role '%s' vars" % task._role.name)
all_vars = _combine_and_track(all_vars, task.get_vars(), "task vars")
# next, we merge in the vars cache (include vars) and nonpersistent
# facts cache (set_fact/register), in that order
if host:
# include_vars non-persistent cache
all_vars = _combine_and_track(all_vars, self._vars_cache.get(host.get_name(), dict()), "include_vars")
# fact non-persistent cache
all_vars = _combine_and_track(all_vars, self._nonpersistent_fact_cache.get(host.name, dict()), "set_fact")
# next, we merge in role params and task include params
if task:
if task._role:
all_vars = _combine_and_track(all_vars, task._role.get_role_params(task.get_dep_chain()), "role '%s' params" % task._role.name)
# special case for include tasks, where the include params
# may be specified in the vars field for the task, which should
# have higher precedence than the vars/np facts above
all_vars = _combine_and_track(all_vars, task.get_include_params(), "include params")
# extra vars
all_vars = _combine_and_track(all_vars, self._extra_vars, "extra vars")
# magic variables
all_vars = _combine_and_track(all_vars, magic_variables, "magic vars")
# special case for the 'environment' magic variable, as someone
# may have set it as a variable and we don't want to stomp on it
if task:
all_vars['environment'] = task.environment
# if we have a task and we're delegating to another host, figure out the
# variables for that host now so we don't have to rely on hostvars later
if task and task.delegate_to is not None and include_delegate_to:
all_vars['ansible_delegated_vars'], all_vars['_ansible_loop_cache'] = self._get_delegated_vars(play, task, all_vars)
# 'vars' magic var
if task or play:
# has to be copy, otherwise recursive ref
all_vars['vars'] = all_vars.copy()
display.debug("done with get_vars()")
if C.DEFAULT_DEBUG:
# Use VarsWithSources wrapper class to display var sources
return VarsWithSources.new_vars_with_sources(all_vars, _vars_sources)
else:
return all_vars
def _get_magic_variables(self, play, host, task, include_hostvars, include_delegate_to,
_hosts=None, _hosts_all=None):
'''
Returns a dictionary of so-called "magic" variables in Ansible,
which are special variables we set internally for use.
'''
variables = {}
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
variables['ansible_playbook_python'] = sys.executable
if play:
# This is a list of all role names of all dependencies for all roles for this play
dependency_role_names = list(set([d._role_name for r in play.roles for d in r.get_all_dependencies()]))
# This is a list of all role names of all roles for this play
play_role_names = [r._role_name for r in play.roles]
# ansible_role_names includes all role names, dependent or directly referenced by the play
variables['ansible_role_names'] = list(set(dependency_role_names + play_role_names))
# ansible_play_role_names includes the names of all roles directly referenced by this play
# roles that are implicitly referenced via dependencies are not listed.
variables['ansible_play_role_names'] = play_role_names
# ansible_dependent_role_names includes the names of all roles that are referenced via dependencies
# dependencies that are also explicitly named as roles are included in this list
variables['ansible_dependent_role_names'] = dependency_role_names
# DEPRECATED: role_names should be deprecated in favor of ansible_role_names or ansible_play_role_names
variables['role_names'] = variables['ansible_play_role_names']
variables['ansible_play_name'] = play.get_name()
if task:
if task._role:
variables['role_name'] = task._role.get_name()
variables['role_path'] = task._role._role_path
variables['role_uuid'] = text_type(task._role._uuid)
if self._inventory is not None:
variables['groups'] = self._inventory.get_groups_dict()
if play:
templar = Templar(loader=self._loader)
if templar.is_template(play.hosts):
pattern = 'all'
else:
pattern = play.hosts or 'all'
# add the list of hosts in the play, as adjusted for limit/filters
if not _hosts_all:
_hosts_all = [h.name for h in self._inventory.get_hosts(pattern=pattern, ignore_restrictions=True)]
if not _hosts:
_hosts = [h.name for h in self._inventory.get_hosts()]
variables['ansible_play_hosts_all'] = _hosts_all[:]
variables['ansible_play_hosts'] = [x for x in variables['ansible_play_hosts_all'] if x not in play._removed_hosts]
variables['ansible_play_batch'] = [x for x in _hosts if x not in play._removed_hosts]
# DEPRECATED: play_hosts should be deprecated in favor of ansible_play_batch,
# however this would take work in the templating engine, so for now we'll add both
variables['play_hosts'] = variables['ansible_play_batch']
# the 'omit' value allows params to be left out if the variable they are based on is undefined
variables['omit'] = self._omit_token
# Set options vars
for option, option_value in iteritems(self._options_vars):
variables[option] = option_value
if self._hostvars is not None and include_hostvars:
variables['hostvars'] = self._hostvars
return variables
def _get_delegated_vars(self, play, task, existing_variables):
if not hasattr(task, 'loop'):
# This "task" is not a Task, so we need to skip it
return {}, None
# we unfortunately need to template the delegate_to field here,
# as we're fetching vars before post_validate has been called on
# the task that has been passed in
vars_copy = existing_variables.copy()
templar = Templar(loader=self._loader, variables=vars_copy)
items = []
has_loop = True
if task.loop_with is not None:
if task.loop_with in lookup_loader:
try:
loop_terms = listify_lookup_plugin_terms(terms=task.loop, templar=templar,
loader=self._loader, fail_on_undefined=True, convert_bare=False)
items = wrap_var(lookup_loader.get(task.loop_with, loader=self._loader, templar=templar).run(terms=loop_terms, variables=vars_copy))
except AnsibleTemplateError:
# This task will be skipped later due to this, so we just setup
# a dummy array for the later code so it doesn't fail
items = [None]
else:
raise AnsibleError("Failed to find the lookup named '%s' in the available lookup plugins" % task.loop_with)
elif task.loop is not None:
try:
items = templar.template(task.loop)
except AnsibleTemplateError:
# This task will be skipped later due to this, so we just setup
# a dummy array for the later code so it doesn't fail
items = [None]
else:
has_loop = False
items = [None]
delegated_host_vars = dict()
item_var = getattr(task.loop_control, 'loop_var', 'item')
cache_items = False
for item in items:
# update the variables with the item value for templating, in case we need it
if item is not None:
vars_copy[item_var] = item
templar.available_variables = vars_copy
delegated_host_name = templar.template(task.delegate_to, fail_on_undefined=False)
if delegated_host_name != task.delegate_to:
cache_items = True
if delegated_host_name is None:
raise AnsibleError(message="Undefined delegate_to host for task:", obj=task._ds)
if not isinstance(delegated_host_name, string_types):
raise AnsibleError(message="the field 'delegate_to' has an invalid type (%s), and could not be"
" converted to a string type." % type(delegated_host_name),
obj=task._ds)
if delegated_host_name in delegated_host_vars:
# no need to repeat ourselves, as the delegate_to value
# does not appear to be tied to the loop item variable
continue
# a dictionary of variables to use if we have to create a new host below
# we set the default port based on the default transport here, to make sure
# we use the proper default for windows
new_port = C.DEFAULT_REMOTE_PORT
if C.DEFAULT_TRANSPORT == 'winrm':
new_port = 5986
new_delegated_host_vars = dict(
ansible_delegated_host=delegated_host_name,
ansible_host=delegated_host_name, # not redundant as other sources can change ansible_host
ansible_port=new_port,
ansible_user=C.DEFAULT_REMOTE_USER,
ansible_connection=C.DEFAULT_TRANSPORT,
)
# now try to find the delegated-to host in inventory, or failing that,
# create a new host on the fly so we can fetch variables for it
delegated_host = None
if self._inventory is not None:
delegated_host = self._inventory.get_host(delegated_host_name)
# try looking it up based on the address field, and finally
# fall back to creating a host on the fly to use for the var lookup
if delegated_host is None:
if delegated_host_name in C.LOCALHOST:
delegated_host = self._inventory.localhost
else:
for h in self._inventory.get_hosts(ignore_limits=True, ignore_restrictions=True):
# check if the address matches, or if both the delegated_to host
# and the current host are in the list of localhost aliases
if h.address == delegated_host_name:
delegated_host = h
break
else:
delegated_host = Host(name=delegated_host_name)
delegated_host.vars = combine_vars(delegated_host.vars, new_delegated_host_vars)
else:
delegated_host = Host(name=delegated_host_name)
delegated_host.vars = combine_vars(delegated_host.vars, new_delegated_host_vars)
# now we go fetch the vars for the delegated-to host and save them in our
# master dictionary of variables to be used later in the TaskExecutor/PlayContext
delegated_host_vars[delegated_host_name] = self.get_vars(
play=play,
host=delegated_host,
task=task,
include_delegate_to=False,
include_hostvars=False,
)
_ansible_loop_cache = None
if has_loop and cache_items:
# delegate_to templating produced a change, so we will cache the templated items
# in a special private hostvar
# this ensures that delegate_to+loop doesn't produce different results than TaskExecutor
# which may reprocess the loop
_ansible_loop_cache = items
return delegated_host_vars, _ansible_loop_cache
def clear_facts(self, hostname):
'''
Clears the facts for a host
'''
self._fact_cache.pop(hostname, None)
def set_host_facts(self, host, facts):
'''
Sets or updates the given facts for a host in the fact cache.
'''
if not isinstance(facts, Mapping):
raise AnsibleAssertionError("the type of 'facts' to set for host_facts should be a Mapping but is a %s" % type(facts))
try:
host_cache = self._fact_cache[host]
except KeyError:
# We get to set this as new
host_cache = facts
else:
if not isinstance(host_cache, MutableMapping):
raise TypeError('The object retrieved for {0} must be a MutableMapping but was'
' a {1}'.format(host, type(host_cache)))
# Update the existing facts
host_cache.update(facts)
# Save the facts back to the backing store
self._fact_cache[host] = host_cache
def set_nonpersistent_facts(self, host, facts):
'''
Sets or updates the given facts for a host in the fact cache.
'''
if not isinstance(facts, Mapping):
raise AnsibleAssertionError("the type of 'facts' to set for nonpersistent_facts should be a Mapping but is a %s" % type(facts))
try:
self._nonpersistent_fact_cache[host].update(facts)
except KeyError:
self._nonpersistent_fact_cache[host] = facts
def set_host_variable(self, host, varname, value):
'''
Sets a value in the vars_cache for a host.
'''
if host not in self._vars_cache:
self._vars_cache[host] = dict()
if varname in self._vars_cache[host] and isinstance(self._vars_cache[host][varname], MutableMapping) and isinstance(value, MutableMapping):
self._vars_cache[host] = combine_vars(self._vars_cache[host], {varname: value})
else:
self._vars_cache[host][varname] = value
class VarsWithSources(MutableMapping):
'''
Dict-like class for vars that also provides source information for each var
This class can only store the source for top-level vars. It does no tracking
on its own, just shows a debug message with the information that it is provided
when a particular var is accessed.
'''
def __init__(self, *args, **kwargs):
''' Dict-compatible constructor '''
self.data = dict(*args, **kwargs)
self.sources = {}
@classmethod
def new_vars_with_sources(cls, data, sources):
''' Alternate constructor method to instantiate class with sources '''
v = cls(data)
v.sources = sources
return v
def get_source(self, key):
return self.sources.get(key, None)
def __getitem__(self, key):
val = self.data[key]
# See notes in the VarsWithSources docstring for caveats and limitations of the source tracking
display.debug("variable '%s' from source: %s" % (key, self.sources.get(key, "unknown")))
return val
def __setitem__(self, key, value):
self.data[key] = value
def __delitem__(self, key):
del self.data[key]
def __iter__(self):
return iter(self.data)
def __len__(self):
return len(self.data)
# Prevent duplicate debug messages by defining our own __contains__ pointing at the underlying dict
def __contains__(self, key):
return self.data.__contains__(key)
def copy(self):
return VarsWithSources.new_vars_with_sources(self.data.copy(), self.sources.copy())
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,365 |
Traceback with "debug var=vars" when jinja2_native is true
|
##### SUMMARY
Traceback with `debug var=vars` when `ANSIBLE_JINJA2_NATIVE` is `true`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/vars/manager.py
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = None
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/venv-main/lib/python3.7/site-packages/ansible
executable location = /home/vagrant/venv-main/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
No config changes.
```
$ ansible-config dump --only-changed
$
```
##### OS / ENVIRONMENT
Ubuntu 18.04.2 LTS
- running playbook on localhost
##### STEPS TO REPRODUCE
Command:
```
ANSIBLE_JINJA2_NATIVE=true ansible-playbook -i localhost, -c local bug-playbook.yml -vvv
```
Playbook `bug-playbook.yml`:
```yaml
- hosts: all
gather_facts: no
tasks:
- name: show vars
debug: var=vars
- name: show hostvars
debug: var=hostvars
```
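One plausible mechanism behind the traceback under ACTUAL RESULTS, judging from `__getstate__`/`__setstate__` in `manager.py` (which omit `_loader`): any copy of `VariableManager` made through the pickle protocol silently loses that attribute, so a later `self._loader` access raises `AttributeError`. A minimal, self-contained sketch (hypothetical `Manager` class, not Ansible's):

```python
# Hedged sketch: a class whose pickle hooks, like VariableManager's, do not
# carry one attribute. copy.deepcopy() goes through those hooks, so the copy
# ends up without the attribute and accessing it raises AttributeError.
import copy

class Manager:
    def __init__(self):
        self._loader = object()      # set in __init__, like VariableManager._loader

    def __getstate__(self):
        return {}                    # _loader deliberately omitted, as in manager.py

    def __setstate__(self, state):
        pass                         # _loader is never restored

    def get_basedir(self):
        return self._loader          # fails on a deepcopied/unpickled instance

clone = copy.deepcopy(Manager())
try:
    clone.get_basedir()
except AttributeError as err:
    print(err)  # 'Manager' object has no attribute '_loader'
```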
##### EXPECTED RESULTS
The `debug` task normally shows the vars output like this:
```
TASK [show vars] ***************************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:5
ok: [localhost] => {
"vars": {
"ansible_check_mode": false,
...
```
##### ACTUAL RESULTS
Traceback from `debug var` on `vars` and `hostvars`:
```
$ ANSIBLE_JINJA2_NATIVE=true ansible-playbook -i localhost, -c local bug-playbook.yml -vvvv
ansible-playbook 2.9.1
config file = None
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/venv-main/lib/python3.7/site-packages/ansible
executable location = /home/vagrant/venv-main/bin/ansible-playbook
python version = 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
No config file found; using defaults
setting up inventory plugins
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
Loading callback plugin default of type stdout, v2.0 from /home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: bug-playbook.yml *****************************************************************************************************************************************************************************************************************
Positional arguments: bug-playbook.yml
verbosity: 4
connection: local
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('localhost,',)
forks: 5
1 plays in bug-playbook.yml
PLAY [all] *********************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [show vars] ***************************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:5
[WARNING]: Failure using method (v2_runner_on_ok) in callback plugin (<ansible.plugins.callback.default.CallbackModule object at 0x7f88cf423cf8>): 'VariableManager' object has no attribute '_loader'
Callback Exception:
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py", line 156, in v2_runner_on_ok
msg += " => %s" % (self._dump_results(result._result),)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/__init__.py", line 126, in _dump_results
jsonified_results = json.dumps(abridged_result, cls=AnsibleJSONEncoder, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
File "/usr/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/module_utils/common/json.py", line 53, in default
value = dict(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 83, in __getitem__
data = self.raw_get(host_name)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 80, in raw_get
return self._variable_manager.get_vars(host=host, include_hostvars=False)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 178, in get_vars
_hosts_all=_hosts_all,
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 443, in _get_magic_variables
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
TASK [show hostvars] ***********************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:8
Callback Exception:
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py", line 156, in v2_runner_on_ok
msg += " => %s" % (self._dump_results(result._result),)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/__init__.py", line 126, in _dump_results
jsonified_results = json.dumps(abridged_result, cls=AnsibleJSONEncoder, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
File "/usr/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/module_utils/common/json.py", line 53, in default
value = dict(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 83, in __getitem__
data = self.raw_get(host_name)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 80, in raw_get
return self._variable_manager.get_vars(host=host, include_hostvars=False)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 178, in get_vars
_hosts_all=_hosts_all,
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 443, in _get_magic_variables
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Running without the jinja2_native=true option does not have this problem (Normal output omitted).
```
$ ANSIBLE_JINJA2_NATIVE=false ansible-playbook -i localhost, -c local bug-playbook.yml -vvv
```
Workarounds:
- best option: set jinja2_native to false
- if you don't need jinja2_native for some playbooks, downgrade jinja2 to 2.9.6
Have tested this with 2.10.0 to 2.10.3, which all show this issue.
Related
---
#64745 looks similar but isn't the same: the repro is different, and the PR from @mkrizek with the fix is already included in 2.9.1.
- includes [this comment](https://github.com/ansible/ansible/issues/64745#issuecomment-553169757) with some related issues.
|
https://github.com/ansible/ansible/issues/65365
|
https://github.com/ansible/ansible/pull/65508
|
0c4f167b82e8c898dd8e6d5b00fcd76aa483d875
|
ec371eb2277891bd9c5b463059730f3012c8ad06
| 2019-11-29T13:59:00Z |
python
| 2020-01-22T10:57:09Z |
test/integration/targets/jinja2_native_types/inventory.jinja2_native_types
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,365 |
Traceback with "debug var=vars" when jinja2_native is true
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Traceback with `debug var=vars` when `ANSIBLE_JINJA2_NATIVE` is `true`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/vars/manager.py
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = None
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/venv-main/lib/python3.7/site-packages/ansible
executable location = /home/vagrant/venv-main/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
No config changes.
```
$ ansible-config dump --only-changed
$
```
##### OS / ENVIRONMENT
Ubuntu 18.04.2 LTS
- running playbook on localhost
##### STEPS TO REPRODUCE
Command:
```
ANSIBLE_JINJA2_NATIVE=true ansible-playbook -i localhost, -c local bug-playbook.yml -vvv
```
Playbook `bug-playbook.yml`:
```yaml
- hosts: all
gather_facts: no
tasks:
- name: show vars
debug: var=vars
- name: show hostvars
debug: var=hostvars
```
##### EXPECTED RESULTS
`debug` task shows vars output normally like this:
```
TASK [show vars] ***************************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:5
ok: [localhost] => {
"vars": {
"ansible_check_mode": false,
...
```
##### ACTUAL RESULTS
Traceback from `debug var` on `vars` and `hostvars`:
```
$ ANSIBLE_JINJA2_NATIVE=true ansible-playbook -i localhost, -c local bug-playbook.yml -vvvv
ansible-playbook 2.9.1
config file = None
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/venv-main/lib/python3.7/site-packages/ansible
executable location = /home/vagrant/venv-main/bin/ansible-playbook
python version = 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
No config file found; using defaults
setting up inventory plugins
Set default localhost to localhost
Parsed localhost, inventory source with host_list plugin
Loading callback plugin default of type stdout, v2.0 from /home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: bug-playbook.yml *****************************************************************************************************************************************************************************************************************
Positional arguments: bug-playbook.yml
verbosity: 4
connection: local
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('localhost,',)
forks: 5
1 plays in bug-playbook.yml
PLAY [all] *********************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [show vars] ***************************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:5
[WARNING]: Failure using method (v2_runner_on_ok) in callback plugin (<ansible.plugins.callback.default.CallbackModule object at 0x7f88cf423cf8>): 'VariableManager' object has no attribute '_loader'
Callback Exception:
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py", line 156, in v2_runner_on_ok
msg += " => %s" % (self._dump_results(result._result),)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/__init__.py", line 126, in _dump_results
jsonified_results = json.dumps(abridged_result, cls=AnsibleJSONEncoder, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
File "/usr/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/module_utils/common/json.py", line 53, in default
value = dict(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 83, in __getitem__
data = self.raw_get(host_name)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 80, in raw_get
return self._variable_manager.get_vars(host=host, include_hostvars=False)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 178, in get_vars
_hosts_all=_hosts_all,
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 443, in _get_magic_variables
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
TASK [show hostvars] ***********************************************************************************************************************************************************************************************************************
task path: /home/vagrant/devel/IT_Cloud/Projects/iac/tryout/bug-vars/bug-playbook.yml:8
Callback Exception:
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/default.py", line 156, in v2_runner_on_ok
msg += " => %s" % (self._dump_results(result._result),)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/plugins/callback/__init__.py", line 126, in _dump_results
jsonified_results = json.dumps(abridged_result, cls=AnsibleJSONEncoder, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
File "/usr/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/module_utils/common/json.py", line 53, in default
value = dict(o)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 83, in __getitem__
data = self.raw_get(host_name)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/hostvars.py", line 80, in raw_get
return self._variable_manager.get_vars(host=host, include_hostvars=False)
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 178, in get_vars
_hosts_all=_hosts_all,
File "/home/vagrant/venv-main/lib/python3.7/site-packages/ansible/vars/manager.py", line 443, in _get_magic_variables
variables['playbook_dir'] = os.path.abspath(self._loader.get_basedir())
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Running without the jinja2_native=true option does not have this problem (Normal output omitted).
```
$ ANSIBLE_JINJA2_NATIVE=false ansible-playbook -i localhost, -c local bug-playbook.yml -vvv
```
Workarounds:
- best option: set jinja2_native to false
- if you don't need jinja2_native for some playbooks, downgrade jinja2 to 2.9.6
Have tested this with 2.10.0 to 2.10.3, which all show this issue.
Related
---
#64745 looks similar but isn't the same: the repro is different, and the PR from @mkrizek with the fix is already included in 2.9.1.
- includes [this comment](https://github.com/ansible/ansible/issues/64745#issuecomment-553169757) with some related issues.
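The traceback above boils down to the JSON encoder's `default` hook converting the lazy `hostvars` mapping with `dict(o)`, which re-enters `VariableManager.get_vars` for every host and only then trips over the missing `_loader` attribute. A minimal, self-contained sketch of that mechanism (the class names here are illustrative stand-ins, not Ansible's real implementation):

```python
import json
from collections.abc import Mapping


class LazyHostVars(Mapping):
    """Mapping whose values are computed on access, like Ansible's HostVars."""

    def __init__(self, hosts, manager):
        self._hosts = hosts
        self._manager = manager

    def __getitem__(self, host):
        # Re-enters the variable manager on every lookup.
        return self._manager.get_vars(host)

    def __iter__(self):
        return iter(self._hosts)

    def __len__(self):
        return len(self._hosts)


class Manager:
    def __init__(self, loader):
        self._loader = loader

    def get_vars(self, host):
        return {"name": host, "basedir": self._loader}


def encode_default(o):
    if isinstance(o, Mapping):
        return dict(o)  # forces evaluation of every key in the lazy mapping
    raise TypeError("not serializable: %r" % (o,))


mgr = Manager("/tmp/project")
hv = LazyHostVars(["localhost"], mgr)
print(json.dumps({"hostvars": hv}, default=encode_default))

# Simulate the bug: a manager instance that never got its attributes set
# (comparable to a VariableManager that was incompletely restored).
broken = Manager.__new__(Manager)
hv_broken = LazyHostVars(["localhost"], broken)
try:
    json.dumps({"hostvars": hv_broken}, default=encode_default)
except AttributeError as exc:
    print(exc)  # 'Manager' object has no attribute '_loader'
```

The second `json.dumps` call fails the same way the callback does: because the mapping defers all work until serialization, an object that lost its attributes earlier only blows up at dump time, inside the encoder.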
|
https://github.com/ansible/ansible/issues/65365
|
https://github.com/ansible/ansible/pull/65508
|
0c4f167b82e8c898dd8e6d5b00fcd76aa483d875
|
ec371eb2277891bd9c5b463059730f3012c8ad06
| 2019-11-29T13:59:00Z |
python
| 2020-01-22T10:57:09Z |
test/integration/targets/jinja2_native_types/runme.sh
|
#!/usr/bin/env bash
set -eux
ANSIBLE_JINJA2_NATIVE=1 ansible-playbook -i inventory.jinja2_native_types runtests.yml -v "$@"
ANSIBLE_JINJA2_NATIVE=1 ansible-playbook -i inventory.jinja2_native_types --vault-password-file test_vault_pass test_vault.yml -v "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,365 |
Traceback with "debug var=vars" when jinja2_native is true
|
|
https://github.com/ansible/ansible/issues/65365
|
https://github.com/ansible/ansible/pull/65508
|
0c4f167b82e8c898dd8e6d5b00fcd76aa483d875
|
ec371eb2277891bd9c5b463059730f3012c8ad06
| 2019-11-29T13:59:00Z |
python
| 2020-01-22T10:57:09Z |
test/integration/targets/jinja2_native_types/test_hostvars.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,417 |
zabbix_template check_template_changed error
|
##### SUMMARY
KeyError: 'templates' on second run
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_template
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = /mnt/d/Code/gitlab/zabbix-servers/ansible/ansible.cfg
configured module search path = ['/home/fism/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /mnt/d/Code/gitlab/zabbix-servers/venv/lib/python3.6/site-packages/ansible
executable location = /mnt/d/Code/gitlab/zabbix-servers/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### STEPS TO REPRODUCE
On first run the new template is created
```yaml
- name: "create test template"
zabbix_template:
server_url: "{{ zabbix_apiserver_url }}"
login_user: "{{ zabbix_apiuser_rw }}"
login_password: "{{ zabbix_apipassword_rw }}"
state: present
template_name: test1
template_groups:
- "Templates"
```
If you run it a second time errors are thrown.
##### EXPECTED RESULTS
The second run should succeed without changes.
##### ACTUAL RESULTS
```shell
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 94, in _ansiballz_main
File "<stdin>", line 40, in invoke_module
File "/usr/lib/python3.6/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_zabbix_template_payload_9t34ekm5/ansible_zabbix_template_payload.zip/ansible/modules/zabbix_template.py", line 743, in <module>
File "/tmp/ansible_zabbix_template_payload_9t34ekm5/ansible_zabbix_template_payload.zip/ansible/modules/zabbix_template.py", line 727, in main
File "/tmp/ansible_zabbix_template_payload_9t34ekm5/ansible_zabbix_template_payload.zip/ansible/modules/zabbix_template.py", line 398, in check_template_changed
KeyError: 'templates'
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 102, in <module>\n File \"<stdin>\", line 94, in _ansiballz_main\n File \"<stdin>\", line 40, in invoke_module\n F
ile \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\
n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_zabbix_template_payload_9t34ekm5/ansi
ble_zabbix_template_payload.zip/ansible/modules/zabbix_template.py\", line 743, in <module>\n File \"/tmp/ansible_zabbix_template_payload_9t34ekm5/ansible_zabbix_template_payload.zip/ansible/modules/
zabbix_template.py\", line 727, in main\n File \"/tmp/ansible_zabbix_template_payload_9t34ekm5/ansible_zabbix_template_payload.zip/ansible/modules/zabbix_template.py\", line 398, in check_template_ch
anged\nKeyError: 'templates'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
This is what the existing_template variable looks like
```json
{
'zabbix_export':
{
'version': '4.4',
'date': '2020-01-13T14: 47: 28Z',
'groups': [
{
'name': 'Templates'
}
],
'templates': [
{
'template': 'fismc',
'name': 'fismc',
'groups': [
{
'name': 'Templates'
}
],
'items': [
{
'name': 'asdf',
'key': 'asdf'
}
]
}
]
}
}
```
So it seems that it fails if there aren't any other templates linked.
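The `KeyError` is consistent with the module indexing a `templates` key (the linked templates) that Zabbix omits from the export when nothing is linked. A hedged sketch of the defensive lookup (the dict below mirrors the `existing_template` dump above; the function name is illustrative, not the module's actual code):

```python
# Export dump as returned by Zabbix when the template has no linked templates:
# the inner entry carries "groups" but no "templates" key.
existing_template = {
    "zabbix_export": {
        "version": "4.4",
        "templates": [
            {
                "template": "fismc",
                "name": "fismc",
                "groups": [{"name": "Templates"}],
            }
        ],
    }
}


def linked_template_names(dump):
    """Return the names of templates linked to the first exported template."""
    template = dump["zabbix_export"]["templates"][0]
    # .get() avoids the KeyError when the export has no linked templates.
    return [t["name"] for t in template.get("templates", [])]


print(linked_template_names(existing_template))  # []
```

With a plain `template["templates"]` lookup the sample dump raises `KeyError: 'templates'`, which matches the traceback; the `.get(..., [])` fallback makes the no-links case behave like an empty link list.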
|
https://github.com/ansible/ansible/issues/66417
|
https://github.com/ansible/ansible/pull/66463
|
eb3d081c1188c10a560b919f48ef9b517a0df65d
|
e646bd08e1f77b0e2c535c1b3d577ef49df7a41a
| 2020-01-13T14:55:35Z |
python
| 2020-01-23T13:22:19Z |
changelogs/fragments/66463-zabbix_template-fix-error-linktemplate-and-importdump.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,417 |
zabbix_template check_template_changed error
|
|
https://github.com/ansible/ansible/issues/66417
|
https://github.com/ansible/ansible/pull/66463
|
eb3d081c1188c10a560b919f48ef9b517a0df65d
|
e646bd08e1f77b0e2c535c1b3d577ef49df7a41a
| 2020-01-13T14:55:35Z |
python
| 2020-01-23T13:22:19Z |
lib/ansible/modules/monitoring/zabbix/zabbix_template.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2017, sookido
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: zabbix_template
short_description: Create/update/delete/dump Zabbix template
description:
- This module allows you to create, modify, delete and dump Zabbix templates.
- Multiple templates can be created or modified at once if passing JSON or XML to module.
version_added: "2.5"
author:
- "sookido (@sookido)"
- "Logan Vig (@logan2211)"
- "Dusan Matejka (@D3DeFi)"
requirements:
- "python >= 2.6"
- "zabbix-api >= 0.5.4"
options:
template_name:
description:
- Name of Zabbix template.
- Required when I(template_json) or I(template_xml) are not used.
- Mutually exclusive with I(template_json) and I(template_xml).
required: false
type: str
template_json:
description:
- JSON dump of templates to import.
- Multiple templates can be imported this way.
- Mutually exclusive with I(template_name) and I(template_xml).
required: false
type: json
template_xml:
description:
- XML dump of templates to import.
- Multiple templates can be imported this way.
- You are advised to pass XML structure matching the structure used by your version of Zabbix server.
- Custom XML structure can be imported as long as it is valid, but may not yield consistent idempotent
results on subsequent runs.
- Mutually exclusive with I(template_name) and I(template_json).
required: false
version_added: '2.9'
type: str
template_groups:
description:
- List of host groups to add template to when template is created.
- Replaces the current host groups the template belongs to if the template is already present.
- Required when creating a new template with C(state=present) and I(template_name) is used.
Not required when updating an existing template.
required: false
type: list
elements: str
link_templates:
description:
- List of template names to be linked to the template.
- Templates that are not specified and are linked to the existing template will be only unlinked and not
cleared from the template.
required: false
type: list
elements: str
clear_templates:
description:
- List of template names to be unlinked and cleared from the template.
- This option is ignored if template is being created for the first time.
required: false
type: list
elements: str
macros:
description:
- List of user macros to create for the template.
- Macros that are not specified and are present on the existing template will be replaced.
- See examples on how to pass macros.
required: false
type: list
elements: dict
suboptions:
name:
description:
- Name of the macro.
- Must be specified in {$NAME} format.
type: str
value:
description:
- Value of the macro.
type: str
dump_format:
description:
- Format to use when dumping template with C(state=dump).
- This option is deprecated and will eventually be removed in 2.14.
required: false
choices: [json, xml]
default: "json"
version_added: '2.9'
type: str
state:
description:
- Required state of the template.
- On C(state=present) template will be created/imported or updated depending if it is already present.
- On C(state=dump) template content will get dumped into required format specified in I(dump_format).
- On C(state=absent) template will be deleted.
- The C(state=dump) is deprecated and will eventually be removed in 2.14. The M(zabbix_template_info) module should be used instead.
required: false
choices: [present, absent, dump]
default: "present"
type: str
extends_documentation_fragment:
- zabbix
'''
EXAMPLES = r'''
---
- name: Create a new Zabbix template linked to groups, macros and templates
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: ExampleHost
template_groups:
- Role
- Role2
link_templates:
- Example template1
- Example template2
macros:
- macro: '{$EXAMPLE_MACRO1}'
value: 30000
- macro: '{$EXAMPLE_MACRO2}'
value: 3
- macro: '{$EXAMPLE_MACRO3}'
value: 'Example'
state: present
- name: Unlink and clear templates from the existing Zabbix template
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: ExampleHost
clear_templates:
- Example template3
- Example template4
state: present
- name: Import Zabbix templates from JSON
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_json: "{{ lookup('file', 'zabbix_apache2.json') }}"
state: present
- name: Import Zabbix templates from XML
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
    template_xml: "{{ lookup('file', 'zabbix_apache2.xml') }}"
state: present
- name: Import Zabbix template from Ansible dict variable
zabbix_template:
login_user: username
login_password: password
server_url: http://127.0.0.1
template_json:
zabbix_export:
version: '3.2'
templates:
- name: Template for Testing
description: 'Testing template import'
template: Test Template
groups:
- name: Templates
applications:
- name: Test Application
state: present
- name: Configure macros on the existing Zabbix template
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: Template
macros:
- macro: '{$TEST_MACRO}'
value: 'Example'
state: present
- name: Delete Zabbix template
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: Template
state: absent
- name: Dump Zabbix template as JSON
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: Template
state: dump
register: template_dump
- name: Dump Zabbix template as XML
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: Template
dump_format: xml
state: dump
register: template_dump
'''
RETURN = r'''
---
template_json:
description: The JSON dump of the template
returned: when state is dump
type: str
sample: {
"zabbix_export":{
"date":"2017-11-29T16:37:24Z",
"templates":[{
"templates":[],
"description":"",
"httptests":[],
"screens":[],
"applications":[],
"discovery_rules":[],
"groups":[{"name":"Templates"}],
"name":"Test Template",
"items":[],
"macros":[],
"template":"test"
}],
"version":"3.2",
"groups":[{
"name":"Templates"
}]
}
}
template_xml:
description: dump of the template in XML representation
returned: when state is dump and dump_format is xml
type: str
sample: |-
<?xml version="1.0" ?>
<zabbix_export>
<version>4.2</version>
<date>2019-07-12T13:37:26Z</date>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
<templates>
<template>
<template>test</template>
<name>Test Template</name>
<description/>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
<applications/>
<items/>
<discovery_rules/>
<httptests/>
<macros/>
<templates/>
<screens/>
<tags/>
</template>
</templates>
</zabbix_export>
'''
import atexit
import json
import traceback
import xml.etree.ElementTree as ET
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
try:
from zabbix_api import ZabbixAPI, ZabbixAPIException
HAS_ZABBIX_API = True
except ImportError:
ZBX_IMP_ERR = traceback.format_exc()
HAS_ZABBIX_API = False
class Template(object):
def __init__(self, module, zbx):
self._module = module
self._zapi = zbx
# check if host group exists
def check_host_group_exist(self, group_names):
for group_name in group_names:
result = self._zapi.hostgroup.get({'filter': {'name': group_name}})
if not result:
self._module.fail_json(msg="Hostgroup not found: %s" %
group_name)
return True
# get group ids by group names
def get_group_ids_by_group_names(self, group_names):
group_ids = []
if group_names is None or len(group_names) == 0:
return group_ids
if self.check_host_group_exist(group_names):
group_list = self._zapi.hostgroup.get(
{'output': 'extend',
'filter': {'name': group_names}})
for group in group_list:
group_id = group['groupid']
group_ids.append({'groupid': group_id})
return group_ids
def get_template_ids(self, template_list):
template_ids = []
if template_list is None or len(template_list) == 0:
return template_ids
for template in template_list:
template_list = self._zapi.template.get(
{'output': 'extend',
'filter': {'host': template}})
if len(template_list) < 1:
continue
else:
template_id = template_list[0]['templateid']
template_ids.append(template_id)
return template_ids
def add_template(self, template_name, group_ids, link_template_ids, macros):
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.template.create({'host': template_name, 'groups': group_ids, 'templates': link_template_ids,
'macros': macros})
def check_template_changed(self, template_ids, template_groups, link_templates, clear_templates,
template_macros, template_content, template_type):
"""Compares template parameters to already existing values if any are found.
template_json - JSON structures are compared as deep sorted dictionaries,
template_xml - XML structures are compared as strings, but filtered and formatted first,
If none above is used, all the other arguments are compared to their existing counterparts
retrieved from Zabbix API."""
changed = False
# Compare filtered and formatted XMLs strings for any changes. It is expected that provided
# XML has same structure as Zabbix uses (e.g. it was optimally exported via Zabbix GUI or API)
if template_content is not None and template_type == 'xml':
existing_template = self.dump_template(template_ids, template_type='xml')
if self.filter_xml_template(template_content) != self.filter_xml_template(existing_template):
changed = True
return changed
existing_template = self.dump_template(template_ids, template_type='json')
# Compare JSON objects as deep sorted python dictionaries
if template_content is not None and template_type == 'json':
parsed_template_json = self.load_json_template(template_content)
if self.diff_template(parsed_template_json, existing_template):
changed = True
return changed
# If neither template_json or template_xml were used, user provided all parameters via module options
if template_groups is not None:
existing_groups = [g['name'] for g in existing_template['zabbix_export']['groups']]
if set(template_groups) != set(existing_groups):
changed = True
        # Check if any new templates would be linked or any existing would be unlinked.
        # The 'templates' key is absent from the export when the template has no
        # linked templates, so default to an empty list to avoid a KeyError.
        exist_child_templates = [t['name'] for t in existing_template['zabbix_export']['templates'][0].get('templates', [])]
if link_templates is not None:
if set(link_templates) != set(exist_child_templates):
changed = True
# Mark that there will be changes when at least one existing template will be unlinked
if clear_templates is not None:
for t in clear_templates:
if t in exist_child_templates:
changed = True
break
if template_macros is not None:
existing_macros = existing_template['zabbix_export']['templates'][0]['macros']
if template_macros != existing_macros:
changed = True
return changed
def update_template(self, template_ids, group_ids, link_template_ids, clear_template_ids, template_macros):
template_changes = {}
if group_ids is not None:
template_changes.update({'groups': group_ids})
if link_template_ids is not None:
template_changes.update({'templates': link_template_ids})
if clear_template_ids is not None:
template_changes.update({'templates_clear': clear_template_ids})
if template_macros is not None:
template_changes.update({'macros': template_macros})
if template_changes:
# If we got here we know that only one template was provided via template_name
template_changes.update({'templateid': template_ids[0]})
self._zapi.template.update(template_changes)
def delete_template(self, templateids):
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.template.delete(templateids)
def ordered_json(self, obj):
# Deep sort json dicts for comparison
if isinstance(obj, dict):
return sorted((k, self.ordered_json(v)) for k, v in obj.items())
if isinstance(obj, list):
return sorted(self.ordered_json(x) for x in obj)
else:
return obj
def dump_template(self, template_ids, template_type='json'):
if self._module.check_mode:
self._module.exit_json(changed=True)
try:
dump = self._zapi.configuration.export({'format': template_type, 'options': {'templates': template_ids}})
if template_type == 'xml':
return str(ET.tostring(ET.fromstring(dump.encode('utf-8')), encoding='utf-8').decode('utf-8'))
else:
return self.load_json_template(dump)
except ZabbixAPIException as e:
self._module.fail_json(msg='Unable to export template: %s' % e)
def diff_template(self, template_json_a, template_json_b):
# Compare 2 zabbix templates and return True if they differ.
template_json_a = self.filter_template(template_json_a)
template_json_b = self.filter_template(template_json_b)
if self.ordered_json(template_json_a) == self.ordered_json(template_json_b):
return False
return True
def filter_template(self, template_json):
# Filter the template json to contain only the keys we will update
keep_keys = set(['graphs', 'templates', 'triggers', 'value_maps'])
unwanted_keys = set(template_json['zabbix_export']) - keep_keys
for unwanted_key in unwanted_keys:
del template_json['zabbix_export'][unwanted_key]
# Versions older than 2.4 do not support description field within template
desc_not_supported = False
if LooseVersion(self._zapi.api_version()).version[:2] < LooseVersion('2.4').version:
desc_not_supported = True
# Filter empty attributes from template object to allow accurate comparison
for template in template_json['zabbix_export']['templates']:
for key in list(template.keys()):
if not template[key] or (key == 'description' and desc_not_supported):
template.pop(key)
return template_json
def filter_xml_template(self, template_xml):
        """Filters out keys from XML template that may vary between exports (e.g. date or version) and
keys that are not imported via this module.
It is advised that provided XML template exactly matches XML structure used by Zabbix"""
# Strip last new line and convert string to ElementTree
parsed_xml_root = self.load_xml_template(template_xml.strip())
keep_keys = ['graphs', 'templates', 'triggers', 'value_maps']
# Remove unwanted XML nodes
for node in list(parsed_xml_root):
if node.tag not in keep_keys:
parsed_xml_root.remove(node)
# Filter empty attributes from template objects to allow accurate comparison
for template in list(parsed_xml_root.find('templates')):
for element in list(template):
if element.text is None and len(list(element)) == 0:
template.remove(element)
# Filter new lines and indentation
xml_root_text = list(line.strip() for line in ET.tostring(parsed_xml_root).split('\n'))
return ''.join(xml_root_text)
def load_json_template(self, template_json):
try:
return json.loads(template_json)
except ValueError as e:
self._module.fail_json(msg='Invalid JSON provided', details=to_native(e), exception=traceback.format_exc())
def load_xml_template(self, template_xml):
try:
return ET.fromstring(template_xml)
except ET.ParseError as e:
self._module.fail_json(msg='Invalid XML provided', details=to_native(e), exception=traceback.format_exc())
def import_template(self, template_content, template_type='json'):
# rules schema latest version
update_rules = {
'applications': {
'createMissing': True,
'deleteMissing': True
},
'discoveryRules': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'graphs': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'groups': {
'createMissing': True
},
'httptests': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'items': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'templates': {
'createMissing': True,
'updateExisting': True
},
'templateLinkage': {
'createMissing': True
},
'templateScreens': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'triggers': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'valueMaps': {
'createMissing': True,
'updateExisting': True
}
}
try:
# old api version support here
api_version = self._zapi.api_version()
# updateExisting for application removed from zabbix api after 3.2
if LooseVersion(api_version).version[:2] <= LooseVersion('3.2').version:
update_rules['applications']['updateExisting'] = True
import_data = {'format': template_type, 'source': template_content, 'rules': update_rules}
self._zapi.configuration.import_(import_data)
except ZabbixAPIException as e:
self._module.fail_json(msg='Unable to import template', details=to_native(e),
exception=traceback.format_exc())
def main():
module = AnsibleModule(
argument_spec=dict(
server_url=dict(type='str', required=True, aliases=['url']),
login_user=dict(type='str', required=True),
login_password=dict(type='str', required=True, no_log=True),
http_login_user=dict(type='str', required=False, default=None),
http_login_password=dict(type='str', required=False, default=None, no_log=True),
validate_certs=dict(type='bool', required=False, default=True),
template_name=dict(type='str', required=False),
template_json=dict(type='json', required=False),
template_xml=dict(type='str', required=False),
template_groups=dict(type='list', required=False),
link_templates=dict(type='list', required=False),
clear_templates=dict(type='list', required=False),
macros=dict(type='list', required=False),
dump_format=dict(type='str', required=False, default='json', choices=['json', 'xml']),
state=dict(type='str', default="present", choices=['present', 'absent', 'dump']),
timeout=dict(type='int', default=10)
),
required_one_of=[
['template_name', 'template_json', 'template_xml']
],
mutually_exclusive=[
['template_name', 'template_json', 'template_xml']
],
required_if=[
['state', 'absent', ['template_name']],
['state', 'dump', ['template_name']]
],
supports_check_mode=True
)
if not HAS_ZABBIX_API:
module.fail_json(msg=missing_required_lib('zabbix-api', url='https://pypi.org/project/zabbix-api/'), exception=ZBX_IMP_ERR)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
http_login_user = module.params['http_login_user']
http_login_password = module.params['http_login_password']
validate_certs = module.params['validate_certs']
template_name = module.params['template_name']
template_json = module.params['template_json']
template_xml = module.params['template_xml']
template_groups = module.params['template_groups']
link_templates = module.params['link_templates']
clear_templates = module.params['clear_templates']
template_macros = module.params['macros']
dump_format = module.params['dump_format']
state = module.params['state']
timeout = module.params['timeout']
zbx = None
try:
zbx = ZabbixAPI(server_url, timeout=timeout, user=http_login_user, passwd=http_login_password,
validate_certs=validate_certs)
zbx.login(login_user, login_password)
atexit.register(zbx.logout)
except ZabbixAPIException as e:
module.fail_json(msg="Failed to connect to Zabbix server: %s" % e)
template = Template(module, zbx)
# Identify template names for IDs retrieval
# Template names are expected to reside in ['zabbix_export']['templates'][*]['template'] for both data types
template_content, template_type = None, None
if template_json is not None:
template_type = 'json'
template_content = template_json
json_parsed = template.load_json_template(template_content)
template_names = list(t['template'] for t in json_parsed['zabbix_export']['templates'])
elif template_xml is not None:
template_type = 'xml'
template_content = template_xml
xml_parsed = template.load_xml_template(template_content)
template_names = list(t.find('template').text for t in list(xml_parsed.find('templates')))
else:
template_names = [template_name]
template_ids = template.get_template_ids(template_names)
if state == "absent":
if not template_ids:
            module.exit_json(changed=False, msg="Template not found. No changes made: %s" % template_name)
template.delete_template(template_ids)
module.exit_json(changed=True, result="Successfully deleted template %s" % template_name)
elif state == "dump":
module.deprecate("The 'dump' state has been deprecated and will be removed, use 'zabbix_template_info' module instead.", version='2.14')
if not template_ids:
module.fail_json(msg='Template not found: %s' % template_name)
if dump_format == 'json':
module.exit_json(changed=False, template_json=template.dump_template(template_ids, template_type='json'))
elif dump_format == 'xml':
module.exit_json(changed=False, template_xml=template.dump_template(template_ids, template_type='xml'))
elif state == "present":
# Load all subelements for template that were provided by user
group_ids = None
if template_groups is not None:
group_ids = template.get_group_ids_by_group_names(template_groups)
link_template_ids = None
if link_templates is not None:
link_template_ids = template.get_template_ids(link_templates)
clear_template_ids = None
if clear_templates is not None:
clear_template_ids = template.get_template_ids(clear_templates)
if template_macros is not None:
# Zabbix configuration.export does not differentiate python types (numbers are returned as strings)
for macroitem in template_macros:
for key in macroitem:
macroitem[key] = str(macroitem[key])
if not template_ids:
            # Assume new templates are being added when no IDs were found
if template_content is not None:
template.import_template(template_content, template_type)
module.exit_json(changed=True, result="Template import successful")
else:
if group_ids is None:
module.fail_json(msg='template_groups are required when creating a new Zabbix template')
template.add_template(template_name, group_ids, link_template_ids, template_macros)
module.exit_json(changed=True, result="Successfully added template: %s" % template_name)
else:
changed = template.check_template_changed(template_ids, template_groups, link_templates, clear_templates,
template_macros, template_content, template_type)
if module.check_mode:
module.exit_json(changed=changed)
if changed:
if template_type is not None:
template.import_template(template_content, template_type)
else:
template.update_template(template_ids, group_ids, link_template_ids, clear_template_ids,
template_macros)
module.exit_json(changed=changed, result="Template successfully updated")
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,497 |
pamd module: cannot remove line when it is the first line of the file
|
##### SUMMARY
I have a pam.d file which has no comments and only pam directives in it. I was attempting to remove the very first one, which is also the first line of the file. This results in an error. By adding a dummy comment to the file as the first line, I was able to successfully remove the line I needed.
My guess is that because the [append](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/system/pamd.py#L464) method does not set the `prev` and `next` linked-list properties when the line being appended is the first line, the exception occurs. In the case of `if self._head is None`, `prev` and `next` should be set to `None`.
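That hypothesis can be illustrated with a stand-alone sketch. The classes below are simplified stand-ins for the module's `PamdLine`/`PamdService` (not the real implementation): `Line` deliberately does not initialise its own links, so unless `append` sets `prev`/`next` in the `self._head is None` branch, `remove()` would hit `AttributeError` on the first line, exactly as reported:

```python
class Line:
    """Mimics a PAM rule line whose prev/next links are managed externally."""
    def __init__(self, text):
        self.text = text  # no prev/next here -- append() must create them

class PamdService:
    """Minimal doubly linked list of Line objects."""
    def __init__(self):
        self._head = None
        self._tail = None

    def append(self, line):
        if self._head is None:
            self._head = self._tail = line
            # the fix suggested in the report: initialise the links for the
            # very first line too, or remove() fails with AttributeError
            line.prev = None
            line.next = None
        else:
            line.prev = self._tail
            line.next = None
            self._tail.next = line
            self._tail = line

    def remove(self, text):
        current = self._head
        while current is not None:
            if current.text == text:
                if current.prev is None:          # removing the first line
                    self._head = current.next
                    if current.next is not None:
                        current.next.prev = None
                else:
                    current.prev.next = current.next
                    if current.next is None:      # removing the last line
                        self._tail = current.prev
                    else:
                        current.next.prev = current.prev
                return True
            current = current.next
        return False

svc = PamdService()
for text in ('auth pam_lsass.so', 'auth pam_unix.so'):
    svc.append(Line(text))
svc.remove('auth pam_lsass.so')   # removing the first line now succeeds
print(svc._head.text)             # -> auth pam_unix.so
```

With the two `line.prev = None` / `line.next = None` assignments deleted from the `if self._head is None` branch, the `remove()` call above raises `AttributeError: 'Line' object has no attribute 'prev'`, matching the traceback in the report.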
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
pamd
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.3
config file = None
configured module search path = ['/home/administrator/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/venvs/ansible/lib/python3.5/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.5.3 (default, Sep 27 2018, 17:25:39) [GCC 6.3.0 20170516]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
administrator@administrator-PC:~$ ansible-config dump --only-changed
administrator@administrator-PC:~$
```
##### OS / ENVIRONMENT
Deepin 15.11, using the `ansible-playbook` command to run a local playbook
##### STEPS TO REPRODUCE
Example pam.d file that does not work (`deepin-auth-keyboard`):
```
auth [success=2 default=ignore] pam_lsass.so
auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass
auth requisite pam_deny.so
auth required pam_permit.so
```
Playbook task:
```yaml
- name: Remove lsass from Deepin pam.d
pamd:
name: deepin-auth-keyboard
type: auth
control: "[success=2 default=ignore]"
module_path: pam_lsass.so
state: absent
```
##### EXPECTED RESULTS
I expect that the first line of the file is removed
##### ACTUAL RESULTS
```
TASK [kerberos : Remove lsass from Deepin pam.d] ****************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'PamdRule' object has no attribute 'prev'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/home/administrator/.ansible/tmp/ansible-tmp-1565652483.5214953-86228576212948/AnsiballZ_pamd.py\", line 114, in <module>\n _ansiballz_main()\n File \"/home/administrator/.ansible/tmp/ansible-tmp-1565652483.5214953-86228576212948/AnsiballZ_pamd.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/administrator/.ansible/tmp/ansible-tmp-1565652483.5214953-86228576212948/AnsiballZ_pamd.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/opt/venvs/ansible/lib/python3.5/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n File \"/opt/venvs/ansible/lib/python3.5/imp.py\", line 170, in load_source\n module = _exec(spec, sys.modules[name])\n File \"<frozen importlib._bootstrap>\", line 626, in _exec\n File \"<frozen importlib._bootstrap_external>\", line 673, in exec_module\n File \"<frozen importlib._bootstrap>\", line 222, in _call_with_frames_removed\n File \"/tmp/ansible_pamd_payload_x6hjqy1u/__main__.py\", line 877, in <module>\n File \"/tmp/ansible_pamd_payload_x6hjqy1u/__main__.py\", line 843, in main\n File \"/tmp/ansible_pamd_payload_x6hjqy1u/__main__.py\", line 479, in remove\nAttributeError: 'PamdRule' object has no attribute 'prev'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
|
https://github.com/ansible/ansible/issues/60497
|
https://github.com/ansible/ansible/pull/66398
|
e368f788f71c338cd3f049d5d6bdc643a51c0514
|
a4b59d021368285490f7cda50c11ac4f7a8030b5
| 2019-08-13T15:12:15Z |
python
| 2020-01-23T19:08:42Z |
changelogs/fragments/66398-pamd_fix-attributeerror-when-removing-first-line.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,497 |
pamd module: cannot remove line when it is the first line of the file
|
##### SUMMARY
I have a pam.d file which has no comments and only pam directives in it. I was attempting to remove the very first one, which is also the first line of the file. This results in an error. By adding a dummy comment to the file as the first line, I was able to successfully remove the line I needed.
My guess is that because the [append](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/system/pamd.py#L464) method does not set the `prev` and `next` linked-list properties when the line being appended is the first line, the exception occurs. In the case of `if self._head is None`, `prev` and `next` should be set to `None`.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
pamd
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.3
config file = None
configured module search path = ['/home/administrator/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/venvs/ansible/lib/python3.5/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.5.3 (default, Sep 27 2018, 17:25:39) [GCC 6.3.0 20170516]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
administrator@administrator-PC:~$ ansible-config dump --only-changed
administrator@administrator-PC:~$
```
##### OS / ENVIRONMENT
Deepin 15.11, using the `ansible-playbook` command to run a local playbook
##### STEPS TO REPRODUCE
Example pam.d file that does not work (`deepin-auth-keyboard`):
```
auth [success=2 default=ignore] pam_lsass.so
auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass
auth requisite pam_deny.so
auth required pam_permit.so
```
Playbook task:
```yaml
- name: Remove lsass from Deepin pam.d
pamd:
name: deepin-auth-keyboard
type: auth
control: "[success=2 default=ignore]"
module_path: pam_lsass.so
state: absent
```
##### EXPECTED RESULTS
I expect that the first line of the file is removed
##### ACTUAL RESULTS
```
TASK [kerberos : Remove lsass from Deepin pam.d] ****************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'PamdRule' object has no attribute 'prev'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/home/administrator/.ansible/tmp/ansible-tmp-1565652483.5214953-86228576212948/AnsiballZ_pamd.py\", line 114, in <module>\n _ansiballz_main()\n File \"/home/administrator/.ansible/tmp/ansible-tmp-1565652483.5214953-86228576212948/AnsiballZ_pamd.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/administrator/.ansible/tmp/ansible-tmp-1565652483.5214953-86228576212948/AnsiballZ_pamd.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/opt/venvs/ansible/lib/python3.5/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n File \"/opt/venvs/ansible/lib/python3.5/imp.py\", line 170, in load_source\n module = _exec(spec, sys.modules[name])\n File \"<frozen importlib._bootstrap>\", line 626, in _exec\n File \"<frozen importlib._bootstrap_external>\", line 673, in exec_module\n File \"<frozen importlib._bootstrap>\", line 222, in _call_with_frames_removed\n File \"/tmp/ansible_pamd_payload_x6hjqy1u/__main__.py\", line 877, in <module>\n File \"/tmp/ansible_pamd_payload_x6hjqy1u/__main__.py\", line 843, in main\n File \"/tmp/ansible_pamd_payload_x6hjqy1u/__main__.py\", line 479, in remove\nAttributeError: 'PamdRule' object has no attribute 'prev'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
|
https://github.com/ansible/ansible/issues/60497
|
https://github.com/ansible/ansible/pull/66398
|
e368f788f71c338cd3f049d5d6bdc643a51c0514
|
a4b59d021368285490f7cda50c11ac4f7a8030b5
| 2019-08-13T15:12:15Z |
python
| 2020-01-23T19:08:42Z |
lib/ansible/modules/system/pamd.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Kenneth D. Evensen <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
module: pamd
author:
- Kenneth D. Evensen (@kevensen)
short_description: Manage PAM Modules
description:
- Edit PAM service's type, control, module path and module arguments.
- In order for a PAM rule to be modified, the type, control and
module_path must match an existing rule. See man(5) pam.d for details.
version_added: "2.3"
options:
name:
description:
- The name generally refers to the PAM service file to
change, for example system-auth.
type: str
required: true
type:
description:
- The type of the PAM rule being modified.
- The C(type), C(control) and C(module_path) all must match a rule to be modified.
type: str
required: true
choices: [ account, -account, auth, -auth, password, -password, session, -session ]
control:
description:
- The control of the PAM rule being modified.
- This may be a complicated control with brackets. If this is the case, be
sure to put "[bracketed controls]" in quotes.
- The C(type), C(control) and C(module_path) all must match a rule to be modified.
type: str
required: true
module_path:
description:
- The module path of the PAM rule being modified.
- The C(type), C(control) and C(module_path) all must match a rule to be modified.
type: str
required: true
new_type:
description:
- The new type to assign to the new rule.
type: str
choices: [ account, -account, auth, -auth, password, -password, session, -session ]
new_control:
description:
- The new control to assign to the new rule.
type: str
new_module_path:
description:
- The new module path to be assigned to the new rule.
type: str
module_arguments:
description:
- When state is C(updated), the module_arguments will replace existing module_arguments.
- When state is C(args_absent) args matching those listed in module_arguments will be removed.
- When state is C(args_present) any args listed in module_arguments are added if
missing from the existing rule.
- Furthermore, if the module argument takes a value denoted by C(=),
the value will be changed to that specified in module_arguments.
type: list
state:
description:
- The default of C(updated) will modify an existing rule if type,
control and module_path all match an existing rule.
- With C(before), the new rule will be inserted before a rule matching type,
control and module_path.
- Similarly, with C(after), the new rule will be inserted after an existing rule matching type,
control and module_path.
- With either C(before) or C(after) new_type, new_control, and new_module_path must all be specified.
- If state is C(args_absent) or C(args_present), new_type, new_control, and new_module_path will be ignored.
- State C(absent) will remove the rule. The 'absent' state was added in Ansible 2.4.
type: str
choices: [ absent, before, after, args_absent, args_present, updated ]
default: updated
path:
description:
- This is the path to the PAM service files.
type: path
default: /etc/pam.d
backup:
description:
- Create a backup file including the timestamp information so you can
get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '2.6'
'''
EXAMPLES = r'''
- name: Update pamd rule's control in /etc/pam.d/system-auth
pamd:
name: system-auth
type: auth
control: required
module_path: pam_faillock.so
new_control: sufficient
- name: Update pamd rule's complex control in /etc/pam.d/system-auth
pamd:
name: system-auth
type: session
control: '[success=1 default=ignore]'
module_path: pam_succeed_if.so
new_control: '[success=2 default=ignore]'
- name: Insert a new rule before an existing rule
pamd:
name: system-auth
type: auth
control: required
module_path: pam_faillock.so
new_type: auth
new_control: sufficient
new_module_path: pam_faillock.so
state: before
- name: Insert a new rule pam_wheel.so with argument 'use_uid' after an \
existing rule pam_rootok.so
pamd:
name: su
type: auth
control: sufficient
module_path: pam_rootok.so
new_type: auth
new_control: required
new_module_path: pam_wheel.so
module_arguments: 'use_uid'
state: after
- name: Remove module arguments from an existing rule
pamd:
name: system-auth
type: auth
control: required
module_path: pam_faillock.so
module_arguments: ''
state: updated
- name: Replace all module arguments in an existing rule
pamd:
name: system-auth
type: auth
control: required
module_path: pam_faillock.so
module_arguments: 'preauth
silent
deny=3
unlock_time=604800
fail_interval=900'
state: updated
- name: Remove specific arguments from a rule
pamd:
name: system-auth
type: session
control: '[success=1 default=ignore]'
module_path: pam_succeed_if.so
module_arguments: crond,quiet
state: args_absent
- name: Ensure specific arguments are present in a rule
pamd:
name: system-auth
type: session
control: '[success=1 default=ignore]'
module_path: pam_succeed_if.so
module_arguments: crond,quiet
state: args_present
- name: Ensure specific arguments are present in a rule (alternative)
pamd:
name: system-auth
type: session
control: '[success=1 default=ignore]'
module_path: pam_succeed_if.so
module_arguments:
- crond
- quiet
state: args_present
- name: Module arguments requiring commas must be listed as a Yaml list
pamd:
name: special-module
type: account
control: required
module_path: pam_access.so
module_arguments:
- listsep=,
state: args_present
- name: Update specific argument value in a rule
pamd:
name: system-auth
type: auth
control: required
module_path: pam_faillock.so
module_arguments: 'fail_interval=300'
state: args_present
- name: Add pam common-auth rule for duo
pamd:
name: common-auth
new_type: auth
new_control: '[success=1 default=ignore]'
new_module_path: '/lib64/security/pam_duo.so'
state: after
type: auth
module_path: pam_sss.so
control: 'requisite'
'''
RETURN = r'''
change_count:
description: How many rules were changed.
type: int
sample: 1
returned: success
version_added: 2.4
new_rule:
description: The changes to the rule. This was available in Ansible 2.4 and Ansible 2.5. It was removed in Ansible 2.6.
type: str
sample: None None None sha512 shadow try_first_pass use_authtok
returned: success
version_added: 2.4
updated_rule_(n):
description: The rule(s) that was/were changed. This is only available in
Ansible 2.4 and was removed in Ansible 2.5.
type: str
sample:
- password sufficient pam_unix.so sha512 shadow try_first_pass
use_authtok
returned: success
version_added: 2.4
action:
description:
- "The action that was taken; one of: update_rule,
insert_before_rule, insert_after_rule, args_present, args_absent,
absent. This was available in Ansible 2.4 and removed in Ansible 2.8"
returned: always
type: str
sample: "update_rule"
version_added: 2.4
dest:
description:
- "Path to pam.d service that was changed. This is only available in
Ansible 2.3 and was removed in Ansible 2.4."
returned: success
type: str
sample: "/etc/pam.d/system-auth"
backupdest:
description:
- "The file name of the backup file, if created."
returned: success
type: str
version_added: 2.6
...
'''
from ansible.module_utils.basic import AnsibleModule
import os
import re
from tempfile import NamedTemporaryFile
from datetime import datetime
RULE_REGEX = re.compile(r"""(?P<rule_type>-?(?:auth|account|session|password))\s+
(?P<control>\[.*\]|\S*)\s+
(?P<path>\S*)\s*
(?P<args>.*)\s*""", re.X)
RULE_ARG_REGEX = re.compile(r"""(\[.*\]|\S*)""")
VALID_TYPES = ['account', '-account', 'auth', '-auth', 'password', '-password', 'session', '-session']
class PamdLine(object):
def __init__(self, line):
self.line = line
self.prev = None
self.next = None
@property
def is_valid(self):
if self.line == '':
return True
return False
def validate(self):
if not self.is_valid:
return False, "Rule is not valid " + self.line
return True, "Rule is valid " + self.line
# Method to check if a rule matches the type, control and path.
def matches(self, rule_type, rule_control, rule_path, rule_args=None):
return False
def __str__(self):
return str(self.line)
class PamdComment(PamdLine):
def __init__(self, line):
super(PamdComment, self).__init__(line)
@property
def is_valid(self):
if self.line.startswith('#'):
return True
return False
class PamdInclude(PamdLine):
def __init__(self, line):
super(PamdInclude, self).__init__(line)
@property
def is_valid(self):
if self.line.startswith('@include'):
return True
return False
class PamdRule(PamdLine):
valid_simple_controls = ['required', 'requisite', 'sufficient', 'optional', 'include', 'substack', 'definitive']
valid_control_values = ['success', 'open_err', 'symbol_err', 'service_err', 'system_err', 'buf_err',
'perm_denied', 'auth_err', 'cred_insufficient', 'authinfo_unavail', 'user_unknown',
'maxtries', 'new_authtok_reqd', 'acct_expired', 'session_err', 'cred_unavail',
'cred_expired', 'cred_err', 'no_module_data', 'conv_err', 'authtok_err',
'authtok_recover_err', 'authtok_lock_busy', 'authtok_disable_aging', 'try_again',
'ignore', 'abort', 'authtok_expired', 'module_unknown', 'bad_item', 'conv_again',
'incomplete', 'default']
valid_control_actions = ['ignore', 'bad', 'die', 'ok', 'done', 'reset']
def __init__(self, rule_type, rule_control, rule_path, rule_args=None):
self._control = None
self._args = None
self.rule_type = rule_type
self.rule_control = rule_control
self.rule_path = rule_path
self.rule_args = rule_args
# Method to check if a rule matches the type, control and path.
def matches(self, rule_type, rule_control, rule_path, rule_args=None):
if (rule_type == self.rule_type and
rule_control == self.rule_control and
rule_path == self.rule_path):
return True
return False
@classmethod
def rule_from_string(cls, line):
rule_match = RULE_REGEX.search(line)
rule_args = parse_module_arguments(rule_match.group('args'))
return cls(rule_match.group('rule_type'), rule_match.group('control'), rule_match.group('path'), rule_args)
def __str__(self):
if self.rule_args:
return '{0: <11}{1} {2} {3}'.format(self.rule_type, self.rule_control, self.rule_path, ' '.join(self.rule_args))
return '{0: <11}{1} {2}'.format(self.rule_type, self.rule_control, self.rule_path)
@property
def rule_control(self):
if isinstance(self._control, list):
return '[' + ' '.join(self._control) + ']'
return self._control
@rule_control.setter
def rule_control(self, control):
if control.startswith('['):
control = control.replace(' = ', '=').replace('[', '').replace(']', '')
self._control = control.split(' ')
else:
self._control = control
@property
def rule_args(self):
if not self._args:
return []
return self._args
@rule_args.setter
def rule_args(self, args):
self._args = parse_module_arguments(args)
@property
def line(self):
return str(self)
@classmethod
def is_action_unsigned_int(cls, string_num):
number = 0
try:
number = int(string_num)
except ValueError:
return False
if number >= 0:
return True
return False
@property
def is_valid(self):
return self.validate()[0]
def validate(self):
# Validate the rule type
if self.rule_type not in VALID_TYPES:
return False, "Rule type, " + self.rule_type + ", is not valid in rule " + self.line
# Validate the rule control
if isinstance(self._control, str) and self.rule_control not in PamdRule.valid_simple_controls:
return False, "Rule control, " + self.rule_control + ", is not valid in rule " + self.line
elif isinstance(self._control, list):
for control in self._control:
value, action = control.split("=")
if value not in PamdRule.valid_control_values:
return False, "Rule control value, " + value + ", is not valid in rule " + self.line
if action not in PamdRule.valid_control_actions and not PamdRule.is_action_unsigned_int(action):
return False, "Rule control action, " + action + ", is not valid in rule " + self.line
# TODO: Validate path
return True, "Rule is valid " + self.line
# PamdService encapsulates an entire service file and contains one or more
# rules, stored as a doubly linked list of PamdLine objects.
class PamdService(object):
def __init__(self, content):
self._head = None
self._tail = None
for line in content.splitlines():
if line.lstrip().startswith('#'):
pamd_line = PamdComment(line)
elif line.lstrip().startswith('@include'):
pamd_line = PamdInclude(line)
elif line == '':
pamd_line = PamdLine(line)
else:
pamd_line = PamdRule.rule_from_string(line)
self.append(pamd_line)
    def append(self, pamd_line):
        if self._head is None:
            # Explicitly initialize the pointers here: PamdRule does not call
            # PamdLine.__init__, so 'prev'/'next' would otherwise be missing
            # on a rule that happens to be the first line of the file.
            pamd_line.prev = None
            pamd_line.next = None
            self._head = self._tail = pamd_line
        else:
            pamd_line.prev = self._tail
            pamd_line.next = None
            self._tail.next = pamd_line
            self._tail = pamd_line
    def remove(self, rule_type, rule_control, rule_path):
        current_line = self._head
        changed = 0
        while current_line is not None:
            if current_line.matches(rule_type, rule_control, rule_path):
                if current_line.prev is not None:
                    current_line.prev.next = current_line.next
                else:
                    self._head = current_line.next
                # Guard against removing the last line of the file, where
                # current_line.next is None.
                if current_line.next is not None:
                    current_line.next.prev = current_line.prev
                else:
                    self._tail = current_line.prev
                changed += 1
            current_line = current_line.next
        return changed
def get(self, rule_type, rule_control, rule_path):
lines = []
current_line = self._head
while current_line is not None:
if isinstance(current_line, PamdRule) and current_line.matches(rule_type, rule_control, rule_path):
lines.append(current_line)
current_line = current_line.next
return lines
def has_rule(self, rule_type, rule_control, rule_path):
if self.get(rule_type, rule_control, rule_path):
return True
return False
def update_rule(self, rule_type, rule_control, rule_path,
new_type=None, new_control=None, new_path=None, new_args=None):
# Get a list of rules we want to change
rules_to_find = self.get(rule_type, rule_control, rule_path)
new_args = parse_module_arguments(new_args)
changes = 0
for current_rule in rules_to_find:
rule_changed = False
if new_type:
if(current_rule.rule_type != new_type):
rule_changed = True
current_rule.rule_type = new_type
if new_control:
if(current_rule.rule_control != new_control):
rule_changed = True
current_rule.rule_control = new_control
if new_path:
if(current_rule.rule_path != new_path):
rule_changed = True
current_rule.rule_path = new_path
if new_args:
if(current_rule.rule_args != new_args):
rule_changed = True
current_rule.rule_args = new_args
if rule_changed:
changes += 1
return changes
def insert_before(self, rule_type, rule_control, rule_path,
new_type=None, new_control=None, new_path=None, new_args=None):
# Get a list of rules we want to change
rules_to_find = self.get(rule_type, rule_control, rule_path)
changes = 0
# There are two cases to consider.
# 1. The new rule doesn't exist before the existing rule
# 2. The new rule exists
for current_rule in rules_to_find:
# Create a new rule
new_rule = PamdRule(new_type, new_control, new_path, new_args)
# First we'll get the previous rule.
previous_rule = current_rule.prev
# Next we may have to loop backwards if the previous line is a comment. If it
# is, we'll get the previous "rule's" previous.
while previous_rule is not None and isinstance(previous_rule, PamdComment):
previous_rule = previous_rule.prev
# Next we'll see if the previous rule matches what we are trying to insert.
if previous_rule is not None and not previous_rule.matches(new_type, new_control, new_path):
# First set the original previous rule's next to the new_rule
previous_rule.next = new_rule
# Second, set the new_rule's previous to the original previous
new_rule.prev = previous_rule
# Third, set the new rule's next to the current rule
new_rule.next = current_rule
# Fourth, set the current rule's previous to the new_rule
current_rule.prev = new_rule
changes += 1
# Handle the case where it is the first rule in the list.
elif previous_rule is None:
# This is the case where the current rule is not only the first rule
# but the first line as well. So we set the head to the new rule
if current_rule.prev is None:
self._head = new_rule
# This case would occur if the previous line was a comment.
else:
current_rule.prev.next = new_rule
new_rule.prev = current_rule.prev
new_rule.next = current_rule
current_rule.prev = new_rule
changes += 1
return changes
def insert_after(self, rule_type, rule_control, rule_path,
new_type=None, new_control=None, new_path=None, new_args=None):
# Get a list of rules we want to change
rules_to_find = self.get(rule_type, rule_control, rule_path)
changes = 0
# There are two cases to consider.
# 1. The new rule doesn't exist after the existing rule
# 2. The new rule exists
for current_rule in rules_to_find:
# First we'll get the next rule.
next_rule = current_rule.next
# Next we may have to loop forwards if the next line is a comment. If it
# is, we'll get the next "rule's" next.
while next_rule is not None and isinstance(next_rule, PamdComment):
next_rule = next_rule.next
# First we create a new rule
new_rule = PamdRule(new_type, new_control, new_path, new_args)
if next_rule is not None and not next_rule.matches(new_type, new_control, new_path):
# If the previous rule doesn't match we'll insert our new rule.
# Second set the original next rule's previous to the new_rule
next_rule.prev = new_rule
# Third, set the new_rule's next to the original next rule
new_rule.next = next_rule
# Fourth, set the new rule's previous to the current rule
new_rule.prev = current_rule
# Fifth, set the current rule's next to the new_rule
current_rule.next = new_rule
changes += 1
# This is the case where the current_rule is the last in the list
elif next_rule is None:
new_rule.prev = self._tail
new_rule.next = None
self._tail.next = new_rule
self._tail = new_rule
current_rule.next = new_rule
changes += 1
return changes
def add_module_arguments(self, rule_type, rule_control, rule_path, args_to_add):
# Get a list of rules we want to change
rules_to_find = self.get(rule_type, rule_control, rule_path)
args_to_add = parse_module_arguments(args_to_add)
changes = 0
for current_rule in rules_to_find:
rule_changed = False
# create some structures to evaluate the situation
simple_new_args = set()
key_value_new_args = dict()
for arg in args_to_add:
if arg.startswith("["):
continue
elif "=" in arg:
key, value = arg.split("=")
key_value_new_args[key] = value
else:
simple_new_args.add(arg)
key_value_new_args_set = set(key_value_new_args)
simple_current_args = set()
key_value_current_args = dict()
for arg in current_rule.rule_args:
if arg.startswith("["):
continue
elif "=" in arg:
key, value = arg.split("=")
key_value_current_args[key] = value
else:
simple_current_args.add(arg)
key_value_current_args_set = set(key_value_current_args)
new_args_to_add = list()
# Handle new simple arguments
if simple_new_args.difference(simple_current_args):
for arg in simple_new_args.difference(simple_current_args):
new_args_to_add.append(arg)
# Handle new key value arguments
if key_value_new_args_set.difference(key_value_current_args_set):
for key in key_value_new_args_set.difference(key_value_current_args_set):
new_args_to_add.append(key + '=' + key_value_new_args[key])
if new_args_to_add:
current_rule.rule_args += new_args_to_add
rule_changed = True
# Handle existing key value arguments when value is not equal
if key_value_new_args_set.intersection(key_value_current_args_set):
for key in key_value_new_args_set.intersection(key_value_current_args_set):
if key_value_current_args[key] != key_value_new_args[key]:
arg_index = current_rule.rule_args.index(key + '=' + key_value_current_args[key])
current_rule.rule_args[arg_index] = str(key + '=' + key_value_new_args[key])
rule_changed = True
if rule_changed:
changes += 1
return changes
def remove_module_arguments(self, rule_type, rule_control, rule_path, args_to_remove):
# Get a list of rules we want to change
rules_to_find = self.get(rule_type, rule_control, rule_path)
args_to_remove = parse_module_arguments(args_to_remove)
changes = 0
for current_rule in rules_to_find:
if not args_to_remove:
args_to_remove = []
# Let's check to see if there are any args to remove by finding the intersection
# of the rule's current args and the args_to_remove lists
if not list(set(current_rule.rule_args) & set(args_to_remove)):
continue
# There are args to remove, so we create a list of new_args absent the args
# to remove.
current_rule.rule_args = [arg for arg in current_rule.rule_args if arg not in args_to_remove]
changes += 1
return changes
def validate(self):
current_line = self._head
while current_line is not None:
if not current_line.validate()[0]:
return current_line.validate()
current_line = current_line.next
return True, "Module is valid"
def __str__(self):
lines = []
current_line = self._head
while current_line is not None:
lines.append(str(current_line))
current_line = current_line.next
if lines[1].startswith("# Updated by Ansible"):
lines.pop(1)
lines.insert(1, "# Updated by Ansible - " + datetime.now().isoformat())
return '\n'.join(lines) + '\n'
def parse_module_arguments(module_arguments):
# Return empty list if we have no args to parse
if not module_arguments:
return []
elif isinstance(module_arguments, list) and len(module_arguments) == 1 and not module_arguments[0]:
return []
if not isinstance(module_arguments, list):
module_arguments = [module_arguments]
    parsed_args = list()
    for arg in module_arguments:
        for item in filter(None, RULE_ARG_REGEX.findall(arg)):
            if not item.startswith("["):
                # re.sub returns a new string rather than mutating its input,
                # so the result must be assigned for the 'key = value' ->
                # 'key=value' normalization to take effect.
                item = re.sub("\\s*=\\s*", "=", item)
            parsed_args.append(item)
return parsed_args
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
type=dict(type='str', required=True, choices=VALID_TYPES),
control=dict(type='str', required=True),
module_path=dict(type='str', required=True),
new_type=dict(type='str', choices=VALID_TYPES),
new_control=dict(type='str'),
new_module_path=dict(type='str'),
module_arguments=dict(type='list'),
state=dict(type='str', default='updated', choices=['absent', 'after', 'args_absent', 'args_present', 'before', 'updated']),
path=dict(type='path', default='/etc/pam.d'),
backup=dict(type='bool', default=False),
),
supports_check_mode=True,
required_if=[
("state", "args_present", ["module_arguments"]),
("state", "args_absent", ["module_arguments"]),
("state", "before", ["new_control"]),
("state", "before", ["new_type"]),
("state", "before", ["new_module_path"]),
("state", "after", ["new_control"]),
("state", "after", ["new_type"]),
("state", "after", ["new_module_path"]),
],
)
content = str()
fname = os.path.join(module.params["path"], module.params["name"])
# Open the file and read the content or fail
try:
with open(fname, 'r') as service_file_obj:
content = service_file_obj.read()
except IOError as e:
# If unable to read the file, fail out
module.fail_json(msg='Unable to open/read PAM module \
file %s with error %s.' %
(fname, str(e)))
# Assuming we didn't fail, create the service
service = PamdService(content)
# Set the action
action = module.params['state']
changes = 0
# Take action
if action == 'updated':
changes = service.update_rule(module.params['type'], module.params['control'], module.params['module_path'],
module.params['new_type'], module.params['new_control'], module.params['new_module_path'],
module.params['module_arguments'])
elif action == 'before':
changes = service.insert_before(module.params['type'], module.params['control'], module.params['module_path'],
module.params['new_type'], module.params['new_control'], module.params['new_module_path'],
module.params['module_arguments'])
elif action == 'after':
changes = service.insert_after(module.params['type'], module.params['control'], module.params['module_path'],
module.params['new_type'], module.params['new_control'], module.params['new_module_path'],
module.params['module_arguments'])
elif action == 'args_absent':
changes = service.remove_module_arguments(module.params['type'], module.params['control'], module.params['module_path'],
module.params['module_arguments'])
elif action == 'args_present':
if [arg for arg in parse_module_arguments(module.params['module_arguments']) if arg.startswith("[")]:
module.fail_json(msg="Unable to process bracketed '[' complex arguments with 'args_present'. Please use 'updated'.")
changes = service.add_module_arguments(module.params['type'], module.params['control'], module.params['module_path'],
module.params['module_arguments'])
elif action == 'absent':
changes = service.remove(module.params['type'], module.params['control'], module.params['module_path'])
valid, msg = service.validate()
# If the module is not valid (meaning one of the rules is invalid), we will fail
if not valid:
module.fail_json(msg=msg)
result = dict(
changed=(changes > 0),
change_count=changes,
backupdest='',
)
# If not check mode and something changed, backup the original if necessary then write out the file or fail
if not module.check_mode and result['changed']:
# First, create a backup if desired.
if module.params['backup']:
result['backupdest'] = module.backup_local(fname)
try:
temp_file = NamedTemporaryFile(mode='w', dir=module.tmpdir, delete=False)
with open(temp_file.name, 'w') as fd:
fd.write(str(service))
except IOError:
module.fail_json(msg='Unable to create temporary \
file %s' % temp_file)
module.atomic_move(temp_file.name, os.path.realpath(fname))
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,497 |
pamd module: cannot remove line when it is the first line of the file
|
##### SUMMARY
I have a pam.d file which has no comments and only pam directives in it. I was attempting to remove the very first one, which is also the first line of the file. This results in an error. By adding a dummy comment to the file as the first line, I was able to successfully remove the line I needed.
My guess is that because the [append](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/system/pamd.py#L464) method does not set the `prev` and `next` linked-list properties when the line being appended is the first line, the exception occurs. In the case of `if self._head is None`, `prev` and `next` should be set to `None`.
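A minimal standalone sketch of the suggested fix (hypothetical `Node`/`LinkedList` names, not the module's actual classes): initialize `prev`/`next` when appending the first element so a later traversal or removal never hits a missing attribute, and guard the tail case when unlinking:

```python
class Node:
    def __init__(self, line):
        self.line = line
        # prev/next deliberately NOT set here, mimicking PamdRule,
        # which does not call PamdLine.__init__.

class LinkedList:
    def __init__(self):
        self._head = None
        self._tail = None

    def append(self, node):
        if self._head is None:
            # Suggested fix: set the pointers explicitly so removing the
            # first line never raises AttributeError on a missing 'prev'.
            node.prev = None
            node.next = None
            self._head = self._tail = node
        else:
            node.prev = self._tail
            node.next = None
            self._tail.next = node
            self._tail = node

    def remove_first_matching(self, text):
        cur = self._head
        while cur is not None:
            if cur.line == text:
                if cur.prev is not None:
                    cur.prev.next = cur.next
                else:
                    self._head = cur.next
                if cur.next is not None:
                    cur.next.prev = cur.prev
                else:
                    self._tail = cur.prev
                return True
            cur = cur.next
        return False

lst = LinkedList()
for text in ["first", "second"]:
    lst.append(Node(text))
removed = lst.remove_first_matching("first")
print(removed, lst._head.line)  # True second
```

With this shape, removing the very first rule of a file (the failing case above) just advances `_head`, and removing the last one retracts `_tail`.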
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
pamd
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.3
config file = None
configured module search path = ['/home/administrator/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/venvs/ansible/lib/python3.5/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.5.3 (default, Sep 27 2018, 17:25:39) [GCC 6.3.0 20170516]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
administrator@administrator-PC:~$ ansible-config dump --only-changed
administrator@administrator-PC:~$
```
##### OS / ENVIRONMENT
Deepin 15.11, using the `ansible-playbook` command to run a local playbook
##### STEPS TO REPRODUCE
Example pam.d file that does not work (`deepin-auth-keyboard`):
```
auth [success=2 default=ignore] pam_lsass.so
auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass
auth requisite pam_deny.so
auth required pam_permit.so
```
Playbook task:
```yaml
- name: Remove lsass from Deepin pam.d
pamd:
name: deepin-auth-keyboard
type: auth
control: "[success=2 default=ignore]"
module_path: pam_lsass.so
state: absent
```
##### EXPECTED RESULTS
I expect that the first line of the file is removed
##### ACTUAL RESULTS
```
TASK [kerberos : Remove lsass from Deepin pam.d] ****************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'PamdRule' object has no attribute 'prev'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/home/administrator/.ansible/tmp/ansible-tmp-1565652483.5214953-86228576212948/AnsiballZ_pamd.py\", line 114, in <module>\n _ansiballz_main()\n File \"/home/administrator/.ansible/tmp/ansible-tmp-1565652483.5214953-86228576212948/AnsiballZ_pamd.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/administrator/.ansible/tmp/ansible-tmp-1565652483.5214953-86228576212948/AnsiballZ_pamd.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/opt/venvs/ansible/lib/python3.5/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n File \"/opt/venvs/ansible/lib/python3.5/imp.py\", line 170, in load_source\n module = _exec(spec, sys.modules[name])\n File \"<frozen importlib._bootstrap>\", line 626, in _exec\n File \"<frozen importlib._bootstrap_external>\", line 673, in exec_module\n File \"<frozen importlib._bootstrap>\", line 222, in _call_with_frames_removed\n File \"/tmp/ansible_pamd_payload_x6hjqy1u/__main__.py\", line 877, in <module>\n File \"/tmp/ansible_pamd_payload_x6hjqy1u/__main__.py\", line 843, in main\n File \"/tmp/ansible_pamd_payload_x6hjqy1u/__main__.py\", line 479, in remove\nAttributeError: 'PamdRule' object has no attribute 'prev'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
|
https://github.com/ansible/ansible/issues/60497
|
https://github.com/ansible/ansible/pull/66398
|
e368f788f71c338cd3f049d5d6bdc643a51c0514
|
a4b59d021368285490f7cda50c11ac4f7a8030b5
| 2019-08-13T15:12:15Z |
python
| 2020-01-23T19:08:42Z |
test/units/modules/system/test_pamd.py
|
from __future__ import (absolute_import, division, print_function)
from units.compat import unittest
from ansible.modules.system.pamd import PamdRule
from ansible.modules.system.pamd import PamdLine
from ansible.modules.system.pamd import PamdComment
from ansible.modules.system.pamd import PamdInclude
from ansible.modules.system.pamd import PamdService
class PamdLineTestCase(unittest.TestCase):
def setUp(self):
self.pamd_line = PamdLine("This is a test")
def test_line(self):
self.assertEqual("This is a test", str(self.pamd_line))
def test_matches(self):
self.assertFalse(self.pamd_line.matches("test", "matches", "foo", "bar"))
class PamdIncludeTestCase(unittest.TestCase):
def setUp(self):
self.good_include = PamdInclude("@include foobar")
self.bad_include = PamdInclude("include foobar")
def test_line(self):
self.assertEqual("@include foobar", str(self.good_include))
def test_matches(self):
self.assertFalse(self.good_include.matches("something", "something", "dark", "side"))
def test_valid(self):
self.assertTrue(self.good_include.is_valid)
self.assertFalse(self.bad_include.is_valid)
class PamdCommentTestCase(unittest.TestCase):
def setUp(self):
self.good_comment = PamdComment("# This is a test comment")
self.bad_comment = PamdComment("This is a bad test comment")
def test_line(self):
self.assertEqual("# This is a test comment", str(self.good_comment))
def test_matches(self):
self.assertFalse(self.good_comment.matches("test", "matches", "foo", "bar"))
def test_valid(self):
self.assertTrue(self.good_comment.is_valid)
self.assertFalse(self.bad_comment.is_valid)
class PamdRuleTestCase(unittest.TestCase):
def setUp(self):
self.rule = PamdRule('account', 'optional', 'pam_keyinit.so', 'revoke')
def test_type(self):
self.assertEqual(self.rule.rule_type, 'account')
def test_control(self):
self.assertEqual(self.rule.rule_control, 'optional')
self.assertEqual(self.rule._control, 'optional')
def test_path(self):
self.assertEqual(self.rule.rule_path, 'pam_keyinit.so')
def test_args(self):
self.assertEqual(self.rule.rule_args, ['revoke'])
def test_valid(self):
self.assertTrue(self.rule.validate()[0])
class PamdRuleBadValidationTestCase(unittest.TestCase):
def setUp(self):
self.bad_type = PamdRule('foobar', 'optional', 'pam_keyinit.so', 'revoke')
self.bad_control_simple = PamdRule('account', 'foobar', 'pam_keyinit.so', 'revoke')
self.bad_control_value = PamdRule('account', '[foobar=1 default=ignore]', 'pam_keyinit.so', 'revoke')
self.bad_control_action = PamdRule('account', '[success=1 default=foobar]', 'pam_keyinit.so', 'revoke')
def test_validate_bad_type(self):
self.assertFalse(self.bad_type.validate()[0])
def test_validate_bad_control_simple(self):
self.assertFalse(self.bad_control_simple.validate()[0])
def test_validate_bad_control_value(self):
self.assertFalse(self.bad_control_value.validate()[0])
def test_validate_bad_control_action(self):
self.assertFalse(self.bad_control_action.validate()[0])
class PamdServiceTestCase(unittest.TestCase):
def setUp(self):
self.system_auth_string = """#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
@include common-auth
@include common-account
@include common-session
auth required pam_env.so
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid
auth required pam_deny.so
# Test comment
auth sufficient pam_rootok.so
account required pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_succeed_if.so uid
account [success=1 default=ignore] \
pam_succeed_if.so user = vagrant use_uid quiet
account required pam_permit.so
account required pam_access.so listsep=,
session include system-auth
password requisite pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password required pam_deny.so
session optional pam_keyinit.so revoke
session required pam_limits.so
-session optional pam_systemd.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session [success=1 test=me default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so"""
self.simple_system_auth_string = """#%PAM-1.0
auth required pam_env.so
"""
self.pamd = PamdService(self.system_auth_string)
def test_properly_parsed(self):
num_lines = len(self.system_auth_string.splitlines()) + 1
num_lines_processed = len(str(self.pamd).splitlines())
self.assertEqual(num_lines, num_lines_processed)
def test_has_rule(self):
self.assertTrue(self.pamd.has_rule('account', 'required', 'pam_permit.so'))
self.assertTrue(self.pamd.has_rule('account', '[success=1 default=ignore]', 'pam_succeed_if.so'))
def test_doesnt_have_rule(self):
self.assertFalse(self.pamd.has_rule('account', 'requisite', 'pam_permit.so'))
# Test Update
def test_update_rule_type(self):
self.assertTrue(self.pamd.update_rule('session', 'optional', 'pam_keyinit.so', new_type='account'))
self.assertTrue(self.pamd.has_rule('account', 'optional', 'pam_keyinit.so'))
test_rule = PamdRule('account', 'optional', 'pam_keyinit.so', 'revoke')
self.assertIn(str(test_rule), str(self.pamd))
def test_update_rule_that_doesnt_exist(self):
self.assertFalse(self.pamd.update_rule('blah', 'blah', 'blah', new_type='account'))
self.assertFalse(self.pamd.has_rule('blah', 'blah', 'blah'))
test_rule = PamdRule('blah', 'blah', 'blah', 'account')
self.assertNotIn(str(test_rule), str(self.pamd))
def test_update_rule_type_two(self):
self.assertTrue(self.pamd.update_rule('session', '[success=1 default=ignore]', 'pam_succeed_if.so', new_type='account'))
self.assertTrue(self.pamd.has_rule('account', '[success=1 default=ignore]', 'pam_succeed_if.so'))
test_rule = PamdRule('account', '[success=1 default=ignore]', 'pam_succeed_if.so')
self.assertIn(str(test_rule), str(self.pamd))
def test_update_rule_control_simple(self):
self.assertTrue(self.pamd.update_rule('session', 'optional', 'pam_keyinit.so', new_control='required'))
self.assertTrue(self.pamd.has_rule('session', 'required', 'pam_keyinit.so'))
test_rule = PamdRule('session', 'required', 'pam_keyinit.so')
self.assertIn(str(test_rule), str(self.pamd))
def test_update_rule_control_complex(self):
self.assertTrue(self.pamd.update_rule('session',
'[success=1 default=ignore]',
'pam_succeed_if.so',
new_control='[success=2 test=me default=ignore]'))
self.assertTrue(self.pamd.has_rule('session', '[success=2 test=me default=ignore]', 'pam_succeed_if.so'))
test_rule = PamdRule('session', '[success=2 test=me default=ignore]', 'pam_succeed_if.so')
self.assertIn(str(test_rule), str(self.pamd))
def test_update_rule_control_more_complex(self):
self.assertTrue(self.pamd.update_rule('session',
'[success=1 test=me default=ignore]',
'pam_succeed_if.so',
new_control='[success=2 test=me default=ignore]'))
self.assertTrue(self.pamd.has_rule('session', '[success=2 test=me default=ignore]', 'pam_succeed_if.so'))
test_rule = PamdRule('session', '[success=2 test=me default=ignore]', 'pam_succeed_if.so')
self.assertIn(str(test_rule), str(self.pamd))
def test_update_rule_module_path(self):
self.assertTrue(self.pamd.update_rule('auth', 'required', 'pam_env.so', new_path='pam_limits.so'))
self.assertTrue(self.pamd.has_rule('auth', 'required', 'pam_limits.so'))
def test_update_rule_module_path_slash(self):
self.assertTrue(self.pamd.update_rule('auth', 'required', 'pam_env.so', new_path='/lib64/security/pam_duo.so'))
self.assertTrue(self.pamd.has_rule('auth', 'required', '/lib64/security/pam_duo.so'))
def test_update_rule_module_args(self):
self.assertTrue(self.pamd.update_rule('auth', 'sufficient', 'pam_unix.so', new_args='uid uid'))
test_rule = PamdRule('auth', 'sufficient', 'pam_unix.so', 'uid uid')
self.assertIn(str(test_rule), str(self.pamd))
test_rule = PamdRule('auth', 'sufficient', 'pam_unix.so', 'nullok try_first_pass')
self.assertNotIn(str(test_rule), str(self.pamd))
def test_update_first_three(self):
self.assertTrue(self.pamd.update_rule('auth', 'required', 'pam_env.so',
new_type='one', new_control='two', new_path='three'))
self.assertTrue(self.pamd.has_rule('one', 'two', 'three'))
def test_update_first_three_with_module_args(self):
self.assertTrue(self.pamd.update_rule('auth', 'sufficient', 'pam_unix.so',
new_type='one', new_control='two', new_path='three'))
self.assertTrue(self.pamd.has_rule('one', 'two', 'three'))
test_rule = PamdRule('one', 'two', 'three')
self.assertIn(str(test_rule), str(self.pamd))
def test_update_all_four(self):
self.assertTrue(self.pamd.update_rule('auth', 'sufficient', 'pam_unix.so',
new_type='one', new_control='two', new_path='three',
new_args='four five'))
test_rule = PamdRule('one', 'two', 'three', 'four five')
self.assertIn(str(test_rule), str(self.pamd))
test_rule = PamdRule('auth', 'sufficient', 'pam_unix.so', 'nullok try_first_pass')
self.assertNotIn(str(test_rule), str(self.pamd))
def test_update_rule_with_slash(self):
self.assertTrue(self.pamd.update_rule('account', '[success=1 default=ignore]', 'pam_succeed_if.so',
new_type='session', new_path='pam_access.so'))
test_rule = PamdRule('session', '[success=1 default=ignore]', 'pam_access.so')
self.assertIn(str(test_rule), str(self.pamd))
# Insert Before
def test_insert_before_rule(self):
count = self.pamd.insert_before('account', 'required', 'pam_access.so',
new_type='account', new_control='required', new_path='pam_limits.so')
self.assertEqual(count, 1)
rules = self.pamd.get("account", "required", "pam_access.so")
for current_rule in rules:
self.assertTrue(current_rule.prev.matches("account", "required", "pam_limits.so"))
def test_insert_before_rule_where_rule_doesnt_exist(self):
count = self.pamd.insert_before('account', 'sufficient', 'pam_access.so',
new_type='account', new_control='required', new_path='pam_limits.so')
self.assertFalse(count)
def test_insert_before_rule_with_args(self):
self.assertTrue(self.pamd.insert_before('account', 'required', 'pam_access.so',
new_type='account', new_control='required', new_path='pam_limits.so',
new_args='uid'))
rules = self.pamd.get("account", "required", "pam_access.so")
for current_rule in rules:
self.assertTrue(current_rule.prev.matches("account", "required", "pam_limits.so", 'uid'))
def test_insert_before_rule_test_duplicates(self):
self.assertTrue(self.pamd.insert_before('account', 'required', 'pam_access.so',
new_type='account', new_control='required', new_path='pam_limits.so'))
self.pamd.insert_before('account', 'required', 'pam_access.so',
new_type='account', new_control='required', new_path='pam_limits.so')
rules = self.pamd.get("account", "required", "pam_access.so")
for current_rule in rules:
previous_rule = current_rule.prev
self.assertTrue(previous_rule.matches("account", "required", "pam_limits.so"))
self.assertFalse(previous_rule.prev.matches("account", "required", "pam_limits.so"))
def test_insert_before_first_rule(self):
self.assertTrue(self.pamd.insert_before('auth', 'required', 'pam_env.so',
new_type='account', new_control='required', new_path='pam_limits.so'))
def test_insert_before_first_rule_simple(self):
simple_service = PamdService(self.simple_system_auth_string)
self.assertTrue(simple_service.insert_before('auth', 'required', 'pam_env.so',
new_type='account', new_control='required', new_path='pam_limits.so'))
# Insert After
def test_insert_after_rule(self):
self.assertTrue(self.pamd.insert_after('account', 'required', 'pam_unix.so',
new_type='account', new_control='required', new_path='pam_permit.so'))
rules = self.pamd.get("account", "required", "pam_unix.so")
for current_rule in rules:
self.assertTrue(current_rule.next.matches("account", "required", "pam_permit.so"))
def test_insert_after_rule_with_args(self):
self.assertTrue(self.pamd.insert_after('account', 'required', 'pam_access.so',
new_type='account', new_control='required', new_path='pam_permit.so',
new_args='uid'))
rules = self.pamd.get("account", "required", "pam_access.so")
for current_rule in rules:
self.assertTrue(current_rule.next.matches("account", "required", "pam_permit.so", "uid"))
def test_insert_after_test_duplicates(self):
self.assertTrue(self.pamd.insert_after('account', 'required', 'pam_access.so',
new_type='account', new_control='required', new_path='pam_permit.so',
new_args='uid'))
self.assertFalse(self.pamd.insert_after('account', 'required', 'pam_access.so',
new_type='account', new_control='required', new_path='pam_permit.so',
new_args='uid'))
rules = self.pamd.get("account", "required", "pam_access.so")
for current_rule in rules:
self.assertTrue(current_rule.next.matches("account", "required", "pam_permit.so", "uid"))
self.assertFalse(current_rule.next.next.matches("account", "required", "pam_permit.so", "uid"))
def test_insert_after_rule_last_rule(self):
self.assertTrue(self.pamd.insert_after('session', 'required', 'pam_unix.so',
new_type='account', new_control='required', new_path='pam_permit.so',
new_args='uid'))
rules = self.pamd.get("session", "required", "pam_unix.so")
for current_rule in rules:
self.assertTrue(current_rule.next.matches("account", "required", "pam_permit.so", "uid"))
# Remove Module Arguments
def test_remove_module_arguments_one(self):
self.assertTrue(self.pamd.remove_module_arguments('auth', 'sufficient', 'pam_unix.so', 'nullok'))
def test_remove_module_arguments_one_list(self):
self.assertTrue(self.pamd.remove_module_arguments('auth', 'sufficient', 'pam_unix.so', ['nullok']))
def test_remove_module_arguments_two(self):
self.assertTrue(self.pamd.remove_module_arguments('session', '[success=1 default=ignore]', 'pam_succeed_if.so', 'service crond'))
def test_remove_module_arguments_two_list(self):
self.assertTrue(self.pamd.remove_module_arguments('session', '[success=1 default=ignore]', 'pam_succeed_if.so', ['service', 'crond']))
def test_remove_module_arguments_where_none_existed(self):
self.assertTrue(self.pamd.add_module_arguments('session', 'required', 'pam_limits.so', 'arg1 arg2= arg3=arg3'))
def test_add_module_arguments_where_none_existed(self):
self.assertTrue(self.pamd.add_module_arguments('account', 'required', 'pam_unix.so', 'arg1 arg2= arg3=arg3'))
def test_add_module_arguments_where_none_existed_list(self):
self.assertTrue(self.pamd.add_module_arguments('account', 'required', 'pam_unix.so', ['arg1', 'arg2=', 'arg3=arg3']))
def test_add_module_arguments_where_some_existed(self):
self.assertTrue(self.pamd.add_module_arguments('auth', 'sufficient', 'pam_unix.so', 'arg1 arg2= arg3=arg3'))
def test_remove_rule(self):
self.assertTrue(self.pamd.remove('account', 'required', 'pam_unix.so'))
# Second run should not change anything
self.assertFalse(self.pamd.remove('account', 'required', 'pam_unix.so'))
test_rule = PamdRule('account', 'required', 'pam_unix.so')
self.assertNotIn(str(test_rule), str(self.pamd))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,725 |
ansible galaxy install exception on empty requirements.yml
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
An exception, `ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'keys'`, is raised if requirements.yml is empty.
Ansible 2.8.x works as expected.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
galaxy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9.4
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
# empty requirements.yml
---
```
run
```
ansible-galaxy install -r requirements.yml -vvv
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
ansible 2.8
```
$ ansible-galaxy install -r requirements.yml
ERROR! No roles found in file: requirements.yml
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
ansible 2.9
```
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'keys'
the full traceback was:
Traceback (most recent call last):
File "/home/rmoser/.local/share/virtualenvs/ansible-puzzle-SlJRgz1S/bin/ansible-galaxy", line 123, in <module>
exit_code = cli.run()
File "/home/rmoser/.local/share/virtualenvs/ansible-puzzle-SlJRgz1S/lib/python3.6/site-packages/ansible/cli/galaxy.py", line 375, in run
context.CLIARGS['func']()
File "/home/rmoser/.local/share/virtualenvs/ansible-puzzle-SlJRgz1S/lib/python3.6/site-packages/ansible/cli/galaxy.py", line 857, in execute_install
roles_left = self._parse_requirements_file(role_file)['roles']
File "/home/rmoser/.local/share/virtualenvs/ansible-puzzle-SlJRgz1S/lib/python3.6/site-packages/ansible/cli/galaxy.py", line 461, in _parse_requirements_file
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
AttributeError: 'NoneType' object has no attribute 'keys'
```
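The failing line assumes the parsed requirements file is always a dict, but an empty YAML document (or a bare `---`) deserializes to `None`. Below is a minimal, hypothetical sketch of the kind of guard that restores the 2.8-style error message; the function name and error text are illustrative and not necessarily what the linked pull request implements:

```python
def parse_requirements(file_requirements):
    """Hypothetical guard: an empty YAML document loads as None,
    which is exactly what triggers the AttributeError in the traceback."""
    if file_requirements is None:
        # Fail early with a readable error instead of an unhandled exception
        raise ValueError("No requirements found in file")
    extra_keys = set(file_requirements.keys()).difference({'roles', 'collections'})
    if extra_keys:
        raise ValueError("Unexpected keys in requirements file: %s"
                         % ", ".join(sorted(extra_keys)))
    return {'roles': file_requirements.get('roles', []),
            'collections': file_requirements.get('collections', [])}
```

With this guard, an empty requirements.yml produces a clear error rather than the `'NoneType' object has no attribute 'keys'` traceback shown above.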
|
https://github.com/ansible/ansible/issues/66725
|
https://github.com/ansible/ansible/pull/66726
|
5c721e8a47848543b4e111783235eafff221666c
|
9e8fb5b7f535dabfe9cb365091bab7831e5ae5f2
| 2020-01-23T15:00:48Z |
python
| 2020-01-23T20:06:44Z |
changelogs/fragments/66726-galaxy-fix-attribute-error.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,725 |
ansible galaxy install exception on empty requirements.yml
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
An exception, `ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'keys'`, is raised if requirements.yml is empty.
Ansible 2.8.x works as expected.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
galaxy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9.4
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
# empty requirements.yml
---
```
run
```
ansible-galaxy install -r requirements.yml -vvv
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
ansible 2.8
```
$ ansible-galaxy install -r requirements.yml
ERROR! No roles found in file: requirements.yml
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
ansible 2.9
```
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'keys'
the full traceback was:
Traceback (most recent call last):
File "/home/rmoser/.local/share/virtualenvs/ansible-puzzle-SlJRgz1S/bin/ansible-galaxy", line 123, in <module>
exit_code = cli.run()
File "/home/rmoser/.local/share/virtualenvs/ansible-puzzle-SlJRgz1S/lib/python3.6/site-packages/ansible/cli/galaxy.py", line 375, in run
context.CLIARGS['func']()
File "/home/rmoser/.local/share/virtualenvs/ansible-puzzle-SlJRgz1S/lib/python3.6/site-packages/ansible/cli/galaxy.py", line 857, in execute_install
roles_left = self._parse_requirements_file(role_file)['roles']
File "/home/rmoser/.local/share/virtualenvs/ansible-puzzle-SlJRgz1S/lib/python3.6/site-packages/ansible/cli/galaxy.py", line 461, in _parse_requirements_file
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
AttributeError: 'NoneType' object has no attribute 'keys'
```
|
https://github.com/ansible/ansible/issues/66725
|
https://github.com/ansible/ansible/pull/66726
|
5c721e8a47848543b4e111783235eafff221666c
|
9e8fb5b7f535dabfe9cb365091bab7831e5ae5f2
| 2020-01-23T15:00:48Z |
python
| 2020-01-23T20:06:44Z |
lib/ansible/cli/galaxy.py
|
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os.path
import re
import shutil
import textwrap
import time
import yaml
from jinja2 import BaseLoader, Environment, FileSystemLoader
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import (
build_collection,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
)
from ansible.galaxy.login import GalaxyLogin
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
# Inject role into sys.argv[1] as a backwards compatibility step
if len(args) > 1 and args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
# Remove this in Ansible 2.13 when we also remove -v as an option on the root parser for ansible-galaxy.
idx = 2 if args[1].startswith('-v') else 1
args.insert(idx, 'role')
self.api_servers = []
self.galaxy = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences. You can also use ansible-galaxy login to '
'retrieve this key or set the token for the GALAXY_SERVER_LIST entry.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs',
default=C.GALAXY_IGNORE_CERTS, help='Ignore SSL certificate validation errors.')
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_login_options(role_parser, parents=[common])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each role installed in the roles_path.')
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument('role', help='Role', nargs='?', metavar='role')
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_login_options(self, parser, parents=None):
login_parser = parser.add_parser('login', parents=parents,
help="Login to api.github.com server in order to use ansible-galaxy role sub "
                                         "commands such as 'import', 'delete', 'publish', and 'setup'")
login_parser.set_defaults(func=self.execute_login)
login_parser.add_argument('--github-token', dest='token', default=None,
help='Identify with github token rather than username and password.')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=C.COLLECTIONS_PATHS[0],
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
else:
install_parser.add_argument('-r', '--role-file', dest='role_file',
help='A file containing a list of roles to be imported.')
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
                                         help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
                                  help='The path in which the collection is built. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required):
return {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
}
server_def = [('url', True), ('username', False), ('password', False), ('token', False),
('auth_url', False)]
config_servers = []
# Need to filter out empty strings or non truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_key in server_list:
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req)) for k, req in server_def)
defs = AnsibleLoader(yaml.safe_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
server_options = C.config.get_plugin_options('galaxy_server', server_key)
            # auth_url is used to create the token, but not directly by GalaxyAPI, so
            # it doesn't need to be passed as a kwarg to GalaxyAPI
            auth_url = server_options.pop('auth_url', None)
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username,
server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=not context.CLIARGS['ignore_certs'])
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
config_servers.append(GalaxyAPI(self.galaxy, server_key, **server_options))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
if cmd_server:
            # Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(self.galaxy, 'cmd_arg', cmd_server, token=cmd_token))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token))
context.CLIARGS['func']()
@property
def api(self):
return self.api_servers[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True):
"""
Parses an Ansible requirement.yml file and returns all the roles and/or collections defined in it. There are 2
        requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name, defaults to Galaxy name from Galaxy or name of repo if src is a URL.
          scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git.
version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
        :return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml.safe_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
        if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml.safe_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles', []):
requirements['roles'] += parse_role_req(role_req)
for collection_req in file_requirements.get('collections', []):
if isinstance(collection_req, dict):
req_name = collection_req.get('name', None)
if req_name is None:
raise AnsibleError("Collections requirement entry should contain the key name.")
req_version = collection_req.get('version', '*')
req_source = collection_req.get('source', None)
if req_source:
# Try and match up the requirement source with our list of Galaxy API servers defined in the
# config, otherwise create a server with that URL without any auth.
req_source = next(iter([a for a in self.api_servers if req_source in [a.name, a.api_server]]),
GalaxyAPI(self.galaxy, "explicit_requirement_%s" % req_name, req_source))
requirements['collections'].append((req_name, req_version, req_source))
else:
requirements['collections'].append((collection_req, '*', None))
return requirements
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
text.append(u"\tdescription: %s" % role_info.get('description', ''))
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
def to_yaml(v):
return yaml.safe_dump(v, default_flow_style=False).rstrip()
env = Environment(loader=BaseLoader)
env.filters['comment_ify'] = comment_ify
env.filters['to_yaml'] = to_yaml
template = env.from_string(meta_template)
meta_value = template.render({'required_config': required_config, 'optional_config': optional_config})
return meta_value
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(collection_path, output_path, force)
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
))
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
if obj_skeleton is not None:
own_skeleton = False
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
template_env = Environment(loader=FileSystemLoader(obj_skeleton))
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
elif galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(rel_root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_env.get_template(src_template).stream(inject_data).dump(dest_file, encoding='utf-8')
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
prints out detailed information about an installed role as well as info available from the galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
remote_data = False
if not context.CLIARGS['offline']:
remote_data = self.api.lookup_role_by_name(role, False)
if remote_data:
role_info.update(remote_data)
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data = self._display_role_info(role_info)
# FIXME: This is broken in both 1.9 and 2.0 as
# _display_role_info() always returns something
if not data:
data = u"\n- the role %s was not found" % role
self.pager(data)
def execute_install(self):
"""
Install one or more roles(``ansible-galaxy role install``), or one or more collections(``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file.
"""
if context.CLIARGS['type'] == 'collection':
collections = context.CLIARGS['args']
force = context.CLIARGS['force']
output_path = context.CLIARGS['collections_path']
ignore_certs = context.CLIARGS['ignore_certs']
ignore_errors = context.CLIARGS['ignore_errors']
requirements_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(requirements_file, allow_old_format=False)['collections']
else:
requirements = []
for collection_input in collections:
requirement = None
if os.path.isfile(to_bytes(collection_input, errors='surrogate_or_strict')) or \
urlparse(collection_input).scheme.lower() in ['http', 'https']:
# Arg is a file path or URL to a collection
name = collection_input
else:
name, dummy, requirement = collection_input.partition(':')
requirements.append((name, requirement or '*', None))
output_path = GalaxyCLI._resolve_path(output_path)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(output_path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection won't be picked up in an Ansible "
"run." % (to_text(output_path), to_text(":".join(collections_path))))
output_path = validate_collection_path(output_path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(requirements, output_path, self.api_servers, (not ignore_certs), ignore_errors,
no_deps, force, force_deps)
return 0
role_file = context.CLIARGS['role_file']
if not context.CLIARGS['args'] and role_file is None:
# the user needs to specify one of either --role-file or specify a single user/role name
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
roles_left = []
if role_file:
if not (role_file.endswith('.yaml') or role_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
roles_left = self._parse_requirements_file(role_file)['roles']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
roles_left.append(GalaxyRole(self.galaxy, self.api, **role))
for role in roles_left:
# only process roles in roles files when names matches if given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = role.metadata.get('dependencies') or []
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in roles_left:
display.display('- adding dependency: %s' % to_text(dep_role))
roles_left.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
                                    display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
roles_left.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
roles_left.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
lists the roles installed on the local system or matches a single role passed as an argument.
"""
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
if context.CLIARGS['role']:
# show the requested role, if it exists
name = context.CLIARGS['role']
gr = GalaxyRole(self.galaxy, self.api, name)
if gr.metadata:
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
else:
display.display("- the role %s was not found" % name)
else:
# show all valid roles in the roles_path directory
roles_path = context.CLIARGS['roles_path']
path_found = False
warnings = []
for path in roles_path:
role_path = os.path.expanduser(path)
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
elif not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
path_found = True
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.api, path_file, path=path)
if gr.metadata:
_display_role(gr)
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths was usable. Please specify a valid path with --roles-path")
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' searches for roles on the Ansible Galaxy server'''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return True
def execute_login(self):
"""
        Verify user's identity via GitHub and retrieve an auth token from Ansible Galaxy.
"""
# Authenticate with github and retrieve a token
if context.CLIARGS['token'] is None:
if C.GALAXY_TOKEN:
github_token = C.GALAXY_TOKEN
else:
login = GalaxyLogin(self.galaxy)
github_token = login.create_github_token()
else:
github_token = context.CLIARGS['token']
galaxy_response = self.api.authenticate(github_token)
if context.CLIARGS['token'] is None and C.GALAXY_TOKEN is None:
# Remove the token we created
login.remove_github_token()
# Store the Galaxy token
token = GalaxyToken()
token.set(galaxy_response['token'])
display.display("Successfully logged into Galaxy as %s" % galaxy_response['username'])
return 0
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
        """ Set up an integration from GitHub or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
            display.display("Secret removed. Integrations using this secret will no longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,567 |
'mysql_user' module is not idempotent on Python 3.7
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The `mysql_user` module always reports the "changed" state on hosts with Python 3.7 set as the interpreter. The changed message points to https://github.com/ansible/ansible/commit/9c5275092f4759c87d4cd938a73368955934b9b9 commit as the possible culprit; the changed state shows up on Ansible 2.8.0+, but not on Ansible 2.7.0+. The actual functionality (creation of the MySQL user account, setting password, etc.) seems to work fine.
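(Note from the report: the root cause isn't pinned down here, but a common class of false-"changed" bugs when a module starts running under Python 3 is comparing bytes against str. The names below are hypothetical and only illustrate that class of bug, not the module's actual code:)

```python
def password_changed(stored, desired):
    # Hypothetical comparison helper: on Python 2, b"..." == u"..." can be
    # True for matching values, so no change is reported. On Python 3,
    # bytes never compare equal to str, so a change is always reported.
    return stored != desired


def normalize(value, encoding="utf-8"):
    # Normalizing both sides to text restores idempotence on Python 3.
    return value.decode(encoding) if isinstance(value, bytes) else value


stored_hash = b"*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19"   # e.g. read from the DB driver
desired_hash = "*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19"   # e.g. computed from the task args

print(password_changed(stored_hash, desired_hash))                        # True on Python 3
print(password_changed(normalize(stored_hash), normalize(desired_hash)))  # False
```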
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
mysql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = None
configured module search path = ['/home/drybjed/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/drybjed/.local/lib/python3.7/site-packages/ansible
executable location = /home/drybjed/.local/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
No changes
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
The Ansible Controller is a Debian Buster (10) host, the working remote host is a Debian Stretch (9) host with Python 2.7 and 3.5 used as the interpreters (no changes). The faulty remote host is a Debian Buster (10) host with Python 3.7 as the interpreter.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: all
become: True
tasks:
- name: Manage MySQL account
mysql_user:
name: 'test-user'
host: 'localhost'
password: 'test-password'
state: 'present'
- name: Manage MySQL account
mysql_user:
name: 'test-user'
host: 'localhost'
password: 'test-password'
state: 'present'
register: output
- debug: var=output
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
PLAY [all] ******************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************
ok: [remote]
TASK [Manage MySQL account] *************************************************************************************************************************
ok: [remote]
TASK [Manage MySQL account] *************************************************************************************************************************
ok: [remote]
TASK [debug] ****************************************************************************************************************************************
ok: [remote] => {
"output": {
"changed": false,
"failed": false,
"msg": "User unchanged",
"user": "test-user"
}
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [all] ******************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************
ok: [remote]
TASK [Manage MySQL account] *************************************************************************************************************************
changed: [remote]
TASK [Manage MySQL account] *************************************************************************************************************************
changed: [remote]
TASK [debug] ****************************************************************************************************************************************
ok: [remote] => {
"output": {
"changed": true,
"failed": false,
"msg": "Password updated (new style)",
"user": "test-user"
}
}
```
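For reference, the hash the module compares against `current_pass_hash` in `user_mod` is MySQL's `mysql_native_password` format: a double SHA-1 of the password, hex-encoded, uppercased, and prefixed with `*` (the module computes it server-side via `SELECT CONCAT('*', UCASE(SHA1(UNHEX(SHA1(%s)))))`). A minimal client-side sketch of an idempotent comparison — the helper names here are illustrative, not part of the module:

```python
import hashlib

def mysql_native_password_hash(password):
    """Compute '*' + UPPER(HEX(SHA1(SHA1(password)))) — the mysql_native_password hash."""
    first = hashlib.sha1(password.encode("utf-8")).digest()   # raw 20-byte SHA-1
    second = hashlib.sha1(first).hexdigest().upper()          # SHA-1 of the raw digest
    return "*" + second

def password_unchanged(current_pass_hash, password):
    """Idempotency check: the task should report 'ok' when the stored hash
    already equals the hash of the desired password."""
    return current_pass_hash == mysql_native_password_hash(password)

h = mysql_native_password_hash("test-password")
print(len(h), h.startswith("*"), password_unchanged(h, "test-password"))
```

If the stored value read back from `mysql.user` ever differs in form from the freshly computed hash (for example, bytes vs. str under Python 3), the equality check fails and the module reports "Password updated" on every run — consistent with the behavior described above.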
|
https://github.com/ansible/ansible/issues/60567
|
https://github.com/ansible/ansible/pull/64059
|
9df9ed4cd3c7a41bf8de07fc2f83002dad660408
|
4d1e21bf18e2171549156dd78845a22ba4bdb5fa
| 2019-08-14T12:08:55Z |
python
| 2020-01-24T18:32:15Z |
changelogs/fragments/64059-mysql_user_fix_password_comparison.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,567 |
'mysql_user' module is not idempotent on Python 3.7
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The `mysql_user` module always reports the "changed" state on hosts with Python 3.7 set as the interpreter. The changed message points to https://github.com/ansible/ansible/commit/9c5275092f4759c87d4cd938a73368955934b9b9 commit as the possible culprit; the changed state shows up on Ansible 2.8.0+, but not on Ansible 2.7.0+. The actual functionality (creation of the MySQL user account, setting password, etc.) seems to work fine.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
mysql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = None
configured module search path = ['/home/drybjed/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/drybjed/.local/lib/python3.7/site-packages/ansible
executable location = /home/drybjed/.local/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
No changes
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
The Ansible Controller is a Debian Buster (10) host, the working remote host is a Debian Stretch (9) host with Python 2.7 and 3.5 used as the interpreters (no changes). The faulty remote host is a Debian Buster (10) host with Python 3.7 as the interpreter.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: all
become: True
tasks:
- name: Manage MySQL account
mysql_user:
name: 'test-user'
host: 'localhost'
password: 'test-password'
state: 'present'
- name: Manage MySQL account
mysql_user:
name: 'test-user'
host: 'localhost'
password: 'test-password'
state: 'present'
register: output
- debug: var=output
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
PLAY [all] ******************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************
ok: [remote]
TASK [Manage MySQL account] *************************************************************************************************************************
ok: [remote]
TASK [Manage MySQL account] *************************************************************************************************************************
ok: [remote]
TASK [debug] ****************************************************************************************************************************************
ok: [remote] => {
"output": {
"changed": false,
"failed": false,
"msg": "User unchanged",
"user": "test-user"
}
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [all] ******************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************
ok: [remote]
TASK [Manage MySQL account] *************************************************************************************************************************
changed: [remote]
TASK [Manage MySQL account] *************************************************************************************************************************
changed: [remote]
TASK [debug] ****************************************************************************************************************************************
ok: [remote] => {
"output": {
"changed": true,
"failed": false,
"msg": "Password updated (new style)",
"user": "test-user"
}
}
```
|
https://github.com/ansible/ansible/issues/60567
|
https://github.com/ansible/ansible/pull/64059
|
9df9ed4cd3c7a41bf8de07fc2f83002dad660408
|
4d1e21bf18e2171549156dd78845a22ba4bdb5fa
| 2019-08-14T12:08:55Z |
python
| 2020-01-24T18:32:15Z |
lib/ansible/modules/database/mysql/mysql_user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Mark Theunissen <[email protected]>
# Sponsored by Four Kitchens http://fourkitchens.com.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: mysql_user
short_description: Adds or removes a user from a MySQL database
description:
- Adds or removes a user from a MySQL database.
version_added: "0.6"
options:
name:
description:
- Name of the user (role) to add or remove.
type: str
required: true
password:
description:
- Set the user's password.
type: str
encrypted:
description:
- Indicate that the 'password' field is a `mysql_native_password` hash.
type: bool
default: no
version_added: "2.0"
host:
description:
- The 'host' part of the MySQL username.
type: str
default: localhost
host_all:
description:
- Override the host option, making ansible apply changes to all hostnames for a given user.
- This option cannot be used when creating users.
type: bool
default: no
version_added: "2.1"
priv:
description:
- "MySQL privileges string in the format: C(db.table:priv1,priv2)."
- "Multiple privileges can be specified by separating each one using
a forward slash: C(db.table:priv/db.table:priv)."
- The format is based on MySQL C(GRANT) statement.
- Database and table names can be quoted, MySQL-style.
- If column privileges are used, the C(priv1,priv2) part must be
exactly as returned by a C(SHOW GRANT) statement. If not followed,
the module will always report changes. It includes grouping columns
by permission (C(SELECT(col1,col2)) instead of C(SELECT(col1),SELECT(col2))).
type: str
append_privs:
description:
- Append the privileges defined by priv to the existing ones for this
user instead of overwriting existing ones.
type: bool
default: no
version_added: "1.4"
sql_log_bin:
description:
- Whether binary logging should be enabled or disabled for the connection.
type: bool
default: yes
version_added: "2.1"
state:
description:
- Whether the user should exist.
- When C(absent), removes the user.
type: str
choices: [ absent, present ]
default: present
check_implicit_admin:
description:
- Check if mysql allows login as root/nopassword before trying supplied credentials.
type: bool
default: no
version_added: "1.3"
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "2.0"
plugin:
description:
- User's plugin to authenticate (``CREATE USER user IDENTIFIED WITH plugin``).
type: str
version_added: '2.10'
plugin_hash_string:
description:
- User's plugin hash string (``CREATE USER user IDENTIFIED WITH plugin AS plugin_hash_string``).
type: str
version_added: '2.10'
plugin_auth_string:
description:
- User's plugin auth_string (``CREATE USER user IDENTIFIED WITH plugin BY plugin_auth_string``).
type: str
version_added: '2.10'
notes:
- "MySQL server installs with default login_user of 'root' and no password. To secure this user
as part of an idempotent playbook, you must create at least two tasks: the first must change the root user's password,
without providing any login_user/login_password details. The second must drop a ~/.my.cnf file containing
the new root credentials. Subsequent runs of the playbook will then succeed by reading the new credentials from
the file."
- Currently, there is only support for the `mysql_native_password` encrypted password hash module.
seealso:
- module: mysql_info
- name: MySQL access control and account management reference
description: Complete reference of the MySQL access control and account management documentation.
link: https://dev.mysql.com/doc/refman/8.0/en/access-control.html
author:
- Jonathan Mainguy (@Jmainguy)
- Benjamin Malynovytch (@bmalynovytch)
- Lukasz Tomaszkiewicz (@tomaszkiewicz)
extends_documentation_fragment: mysql
'''
EXAMPLES = r'''
- name: Removes anonymous user account for localhost
mysql_user:
name: ''
host: localhost
state: absent
- name: Removes all anonymous user accounts
mysql_user:
name: ''
host_all: yes
state: absent
- name: Create database user with name 'bob' and password '12345' with all database privileges
mysql_user:
name: bob
password: 12345
priv: '*.*:ALL'
state: present
- name: Create database user using hashed password with all database privileges
mysql_user:
name: bob
password: '*EE0D72C1085C46C5278932678FBE2C6A782821B4'
encrypted: yes
priv: '*.*:ALL'
state: present
- name: Create database user with password and all database privileges and 'WITH GRANT OPTION'
mysql_user:
name: bob
password: 12345
priv: '*.*:ALL,GRANT'
state: present
# Note that REQUIRESSL is a special privilege that should only apply to *.* by itself.
- name: Modify user to require SSL connections.
mysql_user:
name: bob
append_privs: yes
priv: '*.*:REQUIRESSL'
state: present
- name: Ensure no user named 'sally'@'localhost' exists, also passing in the auth credentials.
mysql_user:
login_user: root
login_password: 123456
name: sally
state: absent
- name: Ensure no user named 'sally' exists at all
mysql_user:
name: sally
host_all: yes
state: absent
- name: Specify grants composed of more than one word
mysql_user:
name: replication
password: 12345
priv: "*.*:REPLICATION CLIENT"
state: present
- name: Revoke all privileges for user 'bob' and password '12345'
mysql_user:
name: bob
password: 12345
priv: "*.*:USAGE"
state: present
# Example privileges string format
# mydb.*:INSERT,UPDATE/anotherdb.*:SELECT/yetanotherdb.*:ALL
- name: Example using login_unix_socket to connect to server
mysql_user:
name: root
password: abc123
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: Example of skipping binary logging while adding user 'bob'
mysql_user:
name: bob
password: 12345
priv: "*.*:USAGE"
state: present
sql_log_bin: no
- name: Create user 'bob' authenticated with plugin 'AWSAuthenticationPlugin'
mysql_user:
name: bob
plugin: AWSAuthenticationPlugin
plugin_hash_string: RDS
priv: '*.*:ALL'
state: present
# Example .my.cnf file for setting the root password
# [client]
# user=root
# password=n<_665{vS43y
'''
import re
import string
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.database import SQLParseError
from ansible.module_utils.mysql import mysql_connect, mysql_driver, mysql_driver_fail_msg
from ansible.module_utils.six import iteritems
from ansible.module_utils._text import to_native
VALID_PRIVS = frozenset(('CREATE', 'DROP', 'GRANT', 'GRANT OPTION',
'LOCK TABLES', 'REFERENCES', 'EVENT', 'ALTER',
'DELETE', 'INDEX', 'INSERT', 'SELECT', 'UPDATE',
'CREATE TEMPORARY TABLES', 'TRIGGER', 'CREATE VIEW',
'SHOW VIEW', 'ALTER ROUTINE', 'CREATE ROUTINE',
'EXECUTE', 'FILE', 'CREATE TABLESPACE', 'CREATE USER',
'PROCESS', 'PROXY', 'RELOAD', 'REPLICATION CLIENT',
'REPLICATION SLAVE', 'SHOW DATABASES', 'SHUTDOWN',
'SUPER', 'ALL', 'ALL PRIVILEGES', 'USAGE', 'REQUIRESSL',
'CREATE ROLE', 'DROP ROLE', 'APPLICATION PASSWORD ADMIN',
'AUDIT ADMIN', 'BACKUP ADMIN', 'BINLOG ADMIN',
'BINLOG ENCRYPTION ADMIN', 'CONNECTION ADMIN',
'ENCRYPTION KEY ADMIN', 'FIREWALL ADMIN', 'FIREWALL USER',
'GROUP REPLICATION ADMIN', 'PERSIST RO VARIABLES ADMIN',
'REPLICATION SLAVE ADMIN', 'RESOURCE GROUP ADMIN',
'RESOURCE GROUP USER', 'ROLE ADMIN', 'SET USER ID',
'SESSION VARIABLES ADMIN', 'SYSTEM VARIABLES ADMIN',
'VERSION TOKEN ADMIN', 'XA RECOVER ADMIN',
'LOAD FROM S3', 'SELECT INTO S3'))
class InvalidPrivsError(Exception):
pass
# ===========================================
# MySQL module specific support methods.
#
# User Authentication Management changed in MySQL 5.7 and MariaDB 10.2.0
def use_old_user_mgmt(cursor):
cursor.execute("SELECT VERSION()")
result = cursor.fetchone()
version_str = result[0]
version = version_str.split('.')
if 'mariadb' in version_str.lower():
# Prior to MariaDB 10.2
if int(version[0]) * 1000 + int(version[1]) < 10002:
return True
else:
return False
else:
# Prior to MySQL 5.7
if int(version[0]) * 1000 + int(version[1]) < 5007:
return True
else:
return False
def get_mode(cursor):
cursor.execute('SELECT @@GLOBAL.sql_mode')
result = cursor.fetchone()
mode_str = result[0]
if 'ANSI' in mode_str:
mode = 'ANSI'
else:
mode = 'NOTANSI'
return mode
def user_exists(cursor, user, host, host_all):
if host_all:
cursor.execute("SELECT count(*) FROM mysql.user WHERE user = %s", ([user]))
else:
cursor.execute("SELECT count(*) FROM mysql.user WHERE user = %s AND host = %s", (user, host))
count = cursor.fetchone()
return count[0] > 0
def user_add(cursor, user, host, host_all, password, encrypted,
plugin, plugin_hash_string, plugin_auth_string, new_priv, check_mode):
# we cannot create users without a proper hostname
if host_all:
return False
if check_mode:
return True
if password and encrypted:
cursor.execute("CREATE USER %s@%s IDENTIFIED BY PASSWORD %s", (user, host, password))
elif password and not encrypted:
cursor.execute("CREATE USER %s@%s IDENTIFIED BY %s", (user, host, password))
elif plugin and plugin_hash_string:
cursor.execute("CREATE USER %s@%s IDENTIFIED WITH %s AS %s", (user, host, plugin, plugin_hash_string))
elif plugin and plugin_auth_string:
cursor.execute("CREATE USER %s@%s IDENTIFIED WITH %s BY %s", (user, host, plugin, plugin_auth_string))
elif plugin:
cursor.execute("CREATE USER %s@%s IDENTIFIED WITH %s", (user, host, plugin))
else:
cursor.execute("CREATE USER %s@%s", (user, host))
if new_priv is not None:
for db_table, priv in iteritems(new_priv):
privileges_grant(cursor, user, host, db_table, priv)
return True
def is_hash(password):
ishash = False
if len(password) == 41 and password[0] == '*':
if frozenset(password[1:]).issubset(string.hexdigits):
ishash = True
return ishash
def user_mod(cursor, user, host, host_all, password, encrypted,
plugin, plugin_hash_string, plugin_auth_string, new_priv, append_privs, module):
changed = False
msg = "User unchanged"
grant_option = False
if host_all:
hostnames = user_get_hostnames(cursor, [user])
else:
hostnames = [host]
for host in hostnames:
# Handle clear text and hashed passwords.
if bool(password):
# Determine what user management method server uses
old_user_mgmt = use_old_user_mgmt(cursor)
# Get a list of valid columns in mysql.user table to check if Password and/or authentication_string exist
cursor.execute("""
SELECT COLUMN_NAME FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME = 'user' AND COLUMN_NAME IN ('Password', 'authentication_string')
ORDER BY COLUMN_NAME DESC LIMIT 1
""")
colA = cursor.fetchone()
cursor.execute("""
SELECT COLUMN_NAME FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME = 'user' AND COLUMN_NAME IN ('Password', 'authentication_string')
ORDER BY COLUMN_NAME ASC LIMIT 1
""")
colB = cursor.fetchone()
# Select hash from either Password or authentication_string, depending which one exists and/or is filled
cursor.execute("""
SELECT COALESCE(
CASE WHEN %s = '' THEN NULL ELSE %s END,
CASE WHEN %s = '' THEN NULL ELSE %s END
)
FROM mysql.user WHERE user = %%s AND host = %%s
""" % (colA[0], colA[0], colB[0], colB[0]), (user, host))
current_pass_hash = cursor.fetchone()[0]
if encrypted:
encrypted_password = password
if not is_hash(encrypted_password):
module.fail_json(msg="encrypted was specified however it does not appear to be a valid hash expecting: *SHA1(SHA1(your_password))")
else:
if old_user_mgmt:
cursor.execute("SELECT PASSWORD(%s)", (password,))
else:
cursor.execute("SELECT CONCAT('*', UCASE(SHA1(UNHEX(SHA1(%s)))))", (password,))
encrypted_password = cursor.fetchone()[0]
if current_pass_hash != encrypted_password:
msg = "Password updated"
if module.check_mode:
return (True, msg)
if old_user_mgmt:
cursor.execute("SET PASSWORD FOR %s@%s = %s", (user, host, encrypted_password))
msg = "Password updated (old style)"
else:
try:
cursor.execute("ALTER USER %s@%s IDENTIFIED WITH mysql_native_password AS %s", (user, host, encrypted_password))
msg = "Password updated (new style)"
except (mysql_driver.Error) as e:
# https://stackoverflow.com/questions/51600000/authentication-string-of-root-user-on-mysql
# Replacing empty root password with new authentication mechanisms fails with error 1396
if e.args[0] == 1396:
cursor.execute(
"UPDATE user SET plugin = %s, authentication_string = %s, Password = '' WHERE User = %s AND Host = %s",
('mysql_native_password', encrypted_password, user, host)
)
cursor.execute("FLUSH PRIVILEGES")
msg = "Password forced update"
else:
raise e
changed = True
# Handle plugin authentication
if plugin:
cursor.execute("SELECT plugin, authentication_string FROM mysql.user "
"WHERE user = %s AND host = %s", (user, host))
current_plugin = cursor.fetchone()
update = False
if current_plugin[0] != plugin:
update = True
if plugin_hash_string and current_plugin[1] != plugin_hash_string:
update = True
if plugin_auth_string and current_plugin[1] != plugin_auth_string:
# this case can cause more updates than expected,
# as plugin can hash auth_string in any way it wants
# and there's no way to figure it out for
# a check, so I prefer to update more often than never
update = True
if update:
if plugin_hash_string:
cursor.execute("ALTER USER %s@%s IDENTIFIED WITH %s AS %s", (user, host, plugin, plugin_hash_string))
elif plugin_auth_string:
cursor.execute("ALTER USER %s@%s IDENTIFIED WITH %s BY %s", (user, host, plugin, plugin_auth_string))
else:
cursor.execute("ALTER USER %s@%s IDENTIFIED WITH %s", (user, host, plugin))
changed = True
# Handle privileges
if new_priv is not None:
curr_priv = privileges_get(cursor, user, host)
# If the user has privileges on a db.table that doesn't appear at all in
# the new specification, then revoke all privileges on it.
for db_table, priv in iteritems(curr_priv):
# If the user has the GRANT OPTION on a db.table, revoke it first.
if "GRANT" in priv:
grant_option = True
if db_table not in new_priv:
if user != "root" and "PROXY" not in priv and not append_privs:
msg = "Privileges updated"
if module.check_mode:
return (True, msg)
privileges_revoke(cursor, user, host, db_table, priv, grant_option)
changed = True
# If the user doesn't currently have any privileges on a db.table, then
# we can perform a straight grant operation.
for db_table, priv in iteritems(new_priv):
if db_table not in curr_priv:
msg = "New privileges granted"
if module.check_mode:
return (True, msg)
privileges_grant(cursor, user, host, db_table, priv)
changed = True
# If the db.table specification exists in both the user's current privileges
# and in the new privileges, then we need to see if there's a difference.
db_table_intersect = set(new_priv.keys()) & set(curr_priv.keys())
for db_table in db_table_intersect:
priv_diff = set(new_priv[db_table]) ^ set(curr_priv[db_table])
if len(priv_diff) > 0:
msg = "Privileges updated"
if module.check_mode:
return (True, msg)
if not append_privs:
privileges_revoke(cursor, user, host, db_table, curr_priv[db_table], grant_option)
privileges_grant(cursor, user, host, db_table, new_priv[db_table])
changed = True
return (changed, msg)
def user_delete(cursor, user, host, host_all, check_mode):
if check_mode:
return True
if host_all:
hostnames = user_get_hostnames(cursor, [user])
for hostname in hostnames:
cursor.execute("DROP USER %s@%s", (user, hostname))
else:
cursor.execute("DROP USER %s@%s", (user, host))
return True
def user_get_hostnames(cursor, user):
cursor.execute("SELECT Host FROM mysql.user WHERE user = %s", user)
hostnames_raw = cursor.fetchall()
hostnames = []
for hostname_raw in hostnames_raw:
hostnames.append(hostname_raw[0])
return hostnames
def privileges_get(cursor, user, host):
""" MySQL doesn't have a better method of getting privileges aside from the
SHOW GRANTS query syntax, which requires us to then parse the returned string.
Here's an example of the string that is returned from MySQL:
GRANT USAGE ON *.* TO 'user'@'localhost' IDENTIFIED BY 'pass';
This function makes the query and returns a dictionary containing the results.
The dictionary format is the same as that returned by privileges_unpack() below.
"""
output = {}
cursor.execute("SHOW GRANTS FOR %s@%s", (user, host))
grants = cursor.fetchall()
def pick(x):
if x == 'ALL PRIVILEGES':
return 'ALL'
else:
return x
for grant in grants:
res = re.match("""GRANT (.+) ON (.+) TO (['`"]).*\\3@(['`"]).*\\4( IDENTIFIED BY PASSWORD (['`"]).+\\6)? ?(.*)""", grant[0])
if res is None:
raise InvalidPrivsError('unable to parse the MySQL grant string: %s' % grant[0])
privileges = res.group(1).split(", ")
privileges = [pick(x) for x in privileges]
if "WITH GRANT OPTION" in res.group(7):
privileges.append('GRANT')
if "REQUIRE SSL" in res.group(7):
privileges.append('REQUIRESSL')
db = res.group(2)
output[db] = privileges
return output
def privileges_unpack(priv, mode):
""" Take a privileges string, typically passed as a parameter, and unserialize
it into a dictionary, the same format as privileges_get() above. We have this
custom format to avoid using YAML/JSON strings inside YAML playbooks. Example
of a privileges string:
mydb.*:INSERT,UPDATE/anotherdb.*:SELECT/yetanother.*:ALL
The privilege USAGE stands for no privileges, so we add that in on *.* if it's
not specified in the string, as MySQL will always provide this by default.
"""
if mode == 'ANSI':
quote = '"'
else:
quote = '`'
output = {}
privs = []
for item in priv.strip().split('/'):
pieces = item.strip().rsplit(':', 1)
dbpriv = pieces[0].rsplit(".", 1)
# Check for FUNCTION or PROCEDURE object types
parts = dbpriv[0].split(" ", 1)
object_type = ''
if len(parts) > 1 and (parts[0] == 'FUNCTION' or parts[0] == 'PROCEDURE'):
object_type = parts[0] + ' '
dbpriv[0] = parts[1]
# Do not escape if privilege is for database or table, i.e.
# neither quote *. nor .*
for i, side in enumerate(dbpriv):
if side.strip('`') != '*':
dbpriv[i] = '%s%s%s' % (quote, side.strip('`'), quote)
pieces[0] = object_type + '.'.join(dbpriv)
if '(' in pieces[1]:
output[pieces[0]] = re.split(r',\s*(?=[^)]*(?:\(|$))', pieces[1].upper())
for i in output[pieces[0]]:
privs.append(re.sub(r'\s*\(.*\)', '', i))
else:
output[pieces[0]] = pieces[1].upper().split(',')
privs = output[pieces[0]]
new_privs = frozenset(privs)
if not new_privs.issubset(VALID_PRIVS):
raise InvalidPrivsError('Invalid privileges specified: %s' % new_privs.difference(VALID_PRIVS))
if '*.*' not in output:
output['*.*'] = ['USAGE']
# if we are only specifying something like REQUIRESSL and/or GRANT (=WITH GRANT OPTION) in *.*
# we still need to add USAGE as a privilege to avoid syntax errors
if 'REQUIRESSL' in priv and not set(output['*.*']).difference(set(['GRANT', 'REQUIRESSL'])):
output['*.*'].append('USAGE')
return output
def privileges_revoke(cursor, user, host, db_table, priv, grant_option):
# Escape '%' since mysql db.execute() uses a format string
db_table = db_table.replace('%', '%%')
if grant_option:
query = ["REVOKE GRANT OPTION ON %s" % db_table]
query.append("FROM %s@%s")
query = ' '.join(query)
cursor.execute(query, (user, host))
priv_string = ",".join([p for p in priv if p not in ('GRANT', 'REQUIRESSL')])
query = ["REVOKE %s ON %s" % (priv_string, db_table)]
query.append("FROM %s@%s")
query = ' '.join(query)
cursor.execute(query, (user, host))
def privileges_grant(cursor, user, host, db_table, priv):
# Escape '%' since mysql db.execute uses a format string and the
# specification of db and table often use a % (SQL wildcard)
db_table = db_table.replace('%', '%%')
priv_string = ",".join([p for p in priv if p not in ('GRANT', 'REQUIRESSL')])
query = ["GRANT %s ON %s" % (priv_string, db_table)]
query.append("TO %s@%s")
if 'REQUIRESSL' in priv:
query.append("REQUIRE SSL")
if 'GRANT' in priv:
query.append("WITH GRANT OPTION")
query = ' '.join(query)
cursor.execute(query, (user, host))
# ===========================================
# Module execution.
#
def main():
module = AnsibleModule(
argument_spec=dict(
login_user=dict(type='str'),
login_password=dict(type='str', no_log=True),
login_host=dict(type='str', default='localhost'),
login_port=dict(type='int', default=3306),
login_unix_socket=dict(type='str'),
user=dict(type='str', required=True, aliases=['name']),
password=dict(type='str', no_log=True),
encrypted=dict(type='bool', default=False),
host=dict(type='str', default='localhost'),
host_all=dict(type="bool", default=False),
state=dict(type='str', default='present', choices=['absent', 'present']),
priv=dict(type='str'),
append_privs=dict(type='bool', default=False),
check_implicit_admin=dict(type='bool', default=False),
update_password=dict(type='str', default='always', choices=['always', 'on_create']),
connect_timeout=dict(type='int', default=30),
config_file=dict(type='path', default='~/.my.cnf'),
sql_log_bin=dict(type='bool', default=True),
client_cert=dict(type='path', aliases=['ssl_cert']),
client_key=dict(type='path', aliases=['ssl_key']),
ca_cert=dict(type='path', aliases=['ssl_ca']),
plugin=dict(default=None, type='str'),
plugin_hash_string=dict(default=None, type='str'),
plugin_auth_string=dict(default=None, type='str'),
),
supports_check_mode=True,
)
login_user = module.params["login_user"]
login_password = module.params["login_password"]
user = module.params["user"]
password = module.params["password"]
encrypted = module.boolean(module.params["encrypted"])
host = module.params["host"].lower()
host_all = module.params["host_all"]
state = module.params["state"]
priv = module.params["priv"]
check_implicit_admin = module.params['check_implicit_admin']
connect_timeout = module.params['connect_timeout']
config_file = module.params['config_file']
append_privs = module.boolean(module.params["append_privs"])
update_password = module.params['update_password']
ssl_cert = module.params["client_cert"]
ssl_key = module.params["client_key"]
ssl_ca = module.params["ca_cert"]
db = ''
sql_log_bin = module.params["sql_log_bin"]
plugin = module.params["plugin"]
plugin_hash_string = module.params["plugin_hash_string"]
plugin_auth_string = module.params["plugin_auth_string"]
if mysql_driver is None:
module.fail_json(msg=mysql_driver_fail_msg)
cursor = None
try:
if check_implicit_admin:
try:
cursor, db_conn = mysql_connect(module, 'root', '', config_file, ssl_cert, ssl_key, ssl_ca, db,
connect_timeout=connect_timeout)
except Exception:
pass
if not cursor:
cursor, db_conn = mysql_connect(module, login_user, login_password, config_file, ssl_cert, ssl_key, ssl_ca, db,
connect_timeout=connect_timeout)
except Exception as e:
module.fail_json(msg="unable to connect to database, check login_user and login_password are correct or %s has the credentials. "
"Exception message: %s" % (config_file, to_native(e)))
if not sql_log_bin:
cursor.execute("SET SQL_LOG_BIN=0;")
if priv is not None:
try:
mode = get_mode(cursor)
except Exception as e:
module.fail_json(msg=to_native(e))
try:
priv = privileges_unpack(priv, mode)
except Exception as e:
module.fail_json(msg="invalid privileges string: %s" % to_native(e))
if state == "present":
if user_exists(cursor, user, host, host_all):
try:
if update_password == 'always':
changed, msg = user_mod(cursor, user, host, host_all, password, encrypted,
plugin, plugin_hash_string, plugin_auth_string,
priv, append_privs, module)
else:
changed, msg = user_mod(cursor, user, host, host_all, None, encrypted,
plugin, plugin_hash_string, plugin_auth_string,
priv, append_privs, module)
except (SQLParseError, InvalidPrivsError, mysql_driver.Error) as e:
module.fail_json(msg=to_native(e))
else:
if host_all:
module.fail_json(msg="host_all parameter cannot be used when adding a user")
try:
changed = user_add(cursor, user, host, host_all, password, encrypted,
plugin, plugin_hash_string, plugin_auth_string,
priv, module.check_mode)
if changed:
msg = "User added"
except (SQLParseError, InvalidPrivsError, mysql_driver.Error) as e:
module.fail_json(msg=to_native(e))
elif state == "absent":
if user_exists(cursor, user, host, host_all):
changed = user_delete(cursor, user, host, host_all, module.check_mode)
msg = "User deleted"
else:
changed = False
msg = "User doesn't exist"
module.exit_json(changed=changed, user=user, msg=msg)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,031 |
Azure VNet Peering disconnected state
|
##### SUMMARY
The playbook creates a virtual network peering between a local and a remote virtual network, using two Azure profiles for authentication (one passed to each module invocation). This works as intended; however, there is an unintended behavior when one of the peerings is deleted, leaving the other peering in a disconnected state.
In that case the playbook either reports ok even though the peering is disconnected, or errors out because the peering is in a disconnected state; re-ordering the module execution only changes which of the two failures occurs.
The enhancement would be to enforce peerings: if either the remote or the local side is disconnected, delete and recreate the peering.
##### ISSUE TYPE
Additional functionality
##### COMPONENT NAME
Azure Module: azure_rm_virtualnetworkpeering
##### ADDITIONAL INFORMATION
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: all
serial: 1
connection: local
gather_facts: false
tasks:
- name: Create virtual network peering
azure_rm_virtualnetworkpeering:
auth_source: credential_file
profile: sub2
state: present
resource_group: rg1
virtual_network: 'vnet1id'
name: peer1
remote_virtual_network: 'vnet2id'
allow_virtual_network_access: true
allow_forwarded_traffic: false
- name: Create virtual network peering
azure_rm_virtualnetworkpeering:
profile: 'sub1'
auth_source: credential_file
state: present
resource_group: rg2
virtual_network: 'vnet2id'
name: peer2
remote_virtual_network: 'vnet1id'
allow_virtual_network_access: true
allow_forwarded_traffic: false
```
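The requested enforcement could be sketched as a small check on the peering state before deciding between an in-place update and a delete/recreate. This is a minimal sketch; `peering_needs_recreate` is a hypothetical helper, and the state values are assumptions based on Azure's documented peering states (`Initiated`, `Connected`, `Disconnected`), not part of the module:

```python
def peering_needs_recreate(existing_peering):
    """Return True when an existing peering should be torn down and
    recreated rather than updated in place.

    Hypothetical helper illustrating the requested behavior; the
    peering state is assumed to be available as 'peering_state' in the
    dict describing the existing peering.
    """
    # Azure reports 'Connected', 'Initiated' or 'Disconnected'; only the
    # last one means the other side of the peering was deleted.
    return existing_peering.get('peering_state') == 'Disconnected'


broken = {'name': 'peer1', 'peering_state': 'Disconnected'}
healthy = {'name': 'peer1', 'peering_state': 'Connected'}
print(peering_needs_recreate(broken))   # → True
print(peering_needs_recreate(healthy))  # → False
```

A module implementing this would delete and recreate the peering whenever the check returns True, instead of failing or reporting ok.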
|
https://github.com/ansible/ansible/issues/66031
|
https://github.com/ansible/ansible/pull/66230
|
a1f6c611b7330d0505669638f9c7a19a33dfddc9
|
f1ec48429a6c64510fb4f74c52fbf92b0cc48df5
| 2019-12-23T06:37:57Z |
python
| 2020-01-25T06:01:17Z |
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkpeering.py
|
#!/usr/bin/python
#
# Copyright (c) 2018 Yunge Zhu <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: azure_rm_virtualnetworkpeering
version_added: "2.8"
short_description: Manage Azure Virtual Network Peering
description:
- Create, update and delete Azure Virtual Network Peering.
options:
resource_group:
description:
- Name of a resource group where the vnet exists.
required: true
name:
description:
- Name of the virtual network peering.
required: true
virtual_network:
description:
- Name or resource ID of the virtual network to be peered.
required: true
remote_virtual_network:
description:
- Remote virtual network to be peered.
- It can be the name of a remote virtual network in the same resource group.
- It can be a remote virtual network resource ID.
- It can be a dict which contains I(name) and I(resource_group) of remote virtual network.
- Required when creating.
allow_virtual_network_access:
description:
- Allows VMs in the remote VNet to access all VMs in the local VNet.
type: bool
default: false
allow_forwarded_traffic:
description:
- Allows forwarded traffic from the VMs in the remote VNet.
type: bool
default: false
use_remote_gateways:
description:
- If remote gateways can be used on this virtual network.
type: bool
default: false
allow_gateway_transit:
description:
- Allows VNet to use the remote VNet's gateway. Remote VNet gateway must have --allow-gateway-transit enabled for remote peering.
- Only 1 peering can have this flag enabled. Cannot be set if the VNet already has a gateway.
type: bool
default: false
state:
description:
- State of the virtual network peering. Use C(present) to create or update a peering and C(absent) to delete it.
default: present
choices:
- absent
- present
extends_documentation_fragment:
- azure
author:
- Yunge Zhu (@yungezz)
'''
EXAMPLES = '''
- name: Create virtual network peering
azure_rm_virtualnetworkpeering:
resource_group: myResourceGroup
virtual_network: myVirtualNetwork
name: myPeering
remote_virtual_network:
resource_group: mySecondResourceGroup
name: myRemoteVirtualNetwork
allow_virtual_network_access: false
allow_forwarded_traffic: true
- name: Delete the virtual network peering
azure_rm_virtualnetworkpeering:
resource_group: myResourceGroup
virtual_network: myVirtualNetwork
name: myPeering
state: absent
'''
RETURN = '''
id:
description:
- ID of the Azure virtual network peering.
returned: always
type: str
sample: "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVirtualNetwork/virtualNetworkPeerings/myPeering"
'''
try:
from msrestazure.azure_exceptions import CloudError
from msrestazure.tools import is_valid_resource_id
from msrest.polling import LROPoller
except ImportError:
# This is handled in azure_rm_common
pass
from ansible.module_utils.azure_rm_common import AzureRMModuleBase, format_resource_id
def virtual_network_to_dict(vnet):
'''
Convert a virtual network object to a dict.
'''
results = dict(
id=vnet.id,
name=vnet.name,
location=vnet.location,
type=vnet.type,
tags=vnet.tags,
provisioning_state=vnet.provisioning_state,
etag=vnet.etag
)
if vnet.dhcp_options and len(vnet.dhcp_options.dns_servers) > 0:
results['dns_servers'] = []
for server in vnet.dhcp_options.dns_servers:
results['dns_servers'].append(server)
if vnet.address_space and len(vnet.address_space.address_prefixes) > 0:
results['address_prefixes'] = []
for space in vnet.address_space.address_prefixes:
results['address_prefixes'].append(space)
return results
def vnetpeering_to_dict(vnetpeering):
'''
Convert a virtual network peering object to a dict.
'''
results = dict(
id=vnetpeering.id,
name=vnetpeering.name,
remote_virtual_network=vnetpeering.remote_virtual_network.id,
remote_address_space=dict(
address_prefixes=vnetpeering.remote_address_space.address_prefixes
),
peering_state=vnetpeering.peering_state,
provisioning_state=vnetpeering.provisioning_state,
use_remote_gateways=vnetpeering.use_remote_gateways,
allow_gateway_transit=vnetpeering.allow_gateway_transit,
allow_forwarded_traffic=vnetpeering.allow_forwarded_traffic,
allow_virtual_network_access=vnetpeering.allow_virtual_network_access,
etag=vnetpeering.etag
)
return results
class AzureRMVirtualNetworkPeering(AzureRMModuleBase):
def __init__(self):
self.module_arg_spec = dict(
resource_group=dict(
type='str',
required=True
),
name=dict(
type='str',
required=True
),
virtual_network=dict(
type='raw'
),
remote_virtual_network=dict(
type='raw'
),
allow_virtual_network_access=dict(
type='bool',
default=False
),
allow_forwarded_traffic=dict(
type='bool',
default=False
),
allow_gateway_transit=dict(
type='bool',
default=False
),
use_remote_gateways=dict(
type='bool',
default=False
),
state=dict(
type='str',
default='present',
choices=['present', 'absent']
)
)
self.resource_group = None
self.name = None
self.virtual_network = None
self.remote_virtual_network = None
self.allow_virtual_network_access = None
self.allow_forwarded_traffic = None
self.allow_gateway_transit = None
self.use_remote_gateways = None
self.results = dict(changed=False)
super(AzureRMVirtualNetworkPeering, self).__init__(derived_arg_spec=self.module_arg_spec,
supports_check_mode=True,
supports_tags=False)
def exec_module(self, **kwargs):
"""Main module execution method"""
for key in list(self.module_arg_spec.keys()):
setattr(self, key, kwargs[key])
to_be_updated = False
resource_group = self.get_resource_group(self.resource_group)
# parse virtual_network
self.virtual_network = self.parse_resource_to_dict(self.virtual_network)
if self.virtual_network['resource_group'] != self.resource_group:
self.fail('Resource group of virtual_network is not same as param resource_group')
# parse remote virtual_network
self.remote_virtual_network = self.format_vnet_id(self.remote_virtual_network)
# get vnet peering
response = self.get_vnet_peering()
if self.state == 'present':
if response:
# check vnet id not changed
existing_vnet = self.parse_resource_to_dict(response['id'])
if existing_vnet['resource_group'] != self.virtual_network['resource_group'] or \
existing_vnet['name'] != self.virtual_network['name']:
self.fail("Cannot update virtual_network of Virtual Network Peering!")
# check remote vnet id not changed
if response['remote_virtual_network'].lower() != self.remote_virtual_network.lower():
self.fail("Cannot update remote_virtual_network of Virtual Network Peering!")
# check if update
to_be_updated = self.check_update(response)
else:
# not exists, create new vnet peering
to_be_updated = True
# check if vnet exists
virtual_network = self.get_vnet(self.virtual_network['resource_group'], self.virtual_network['name'])
if not virtual_network:
self.fail("Virtual network {0} in resource group {1} does not exist!".format(
self.virtual_network['name'], self.virtual_network['resource_group']))
elif self.state == 'absent':
if response:
self.log('Delete Azure Virtual Network Peering')
self.results['changed'] = True
self.results['id'] = response['id']
if self.check_mode:
return self.results
response = self.delete_vnet_peering()
else:
self.fail("Azure Virtual Network Peering {0} does not exist in resource group {1}".format(self.name, self.resource_group))
if to_be_updated:
self.results['changed'] = True
if self.check_mode:
return self.results
response = self.create_or_update_vnet_peering()
self.results['id'] = response['id']
return self.results
def format_vnet_id(self, vnet):
if not vnet:
return vnet
if isinstance(vnet, dict) and vnet.get('name') and vnet.get('resource_group'):
remote_vnet_id = format_resource_id(vnet['name'],
self.subscription_id,
'Microsoft.Network',
'virtualNetworks',
vnet['resource_group'])
elif isinstance(vnet, str):
if is_valid_resource_id(vnet):
remote_vnet_id = vnet
else:
remote_vnet_id = format_resource_id(vnet,
self.subscription_id,
'Microsoft.Network',
'virtualNetworks',
self.resource_group)
else:
self.fail("remote_virtual_network must be a valid resource id, a dict containing name and resource_group, or the name of a virtual network in the same resource group.")
return remote_vnet_id
def check_update(self, existing_vnet_peering):
if self.allow_forwarded_traffic != existing_vnet_peering['allow_forwarded_traffic']:
return True
if self.allow_gateway_transit != existing_vnet_peering['allow_gateway_transit']:
return True
if self.allow_virtual_network_access != existing_vnet_peering['allow_virtual_network_access']:
return True
if self.use_remote_gateways != existing_vnet_peering['use_remote_gateways']:
return True
return False
def get_vnet(self, resource_group, vnet_name):
'''
Get Azure Virtual Network
:return: deserialized Azure Virtual Network
'''
self.log("Get the Azure Virtual Network {0}".format(vnet_name))
vnet = self.network_client.virtual_networks.get(resource_group, vnet_name)
if vnet:
results = virtual_network_to_dict(vnet)
return results
return False
def create_or_update_vnet_peering(self):
'''
Creates or Update Azure Virtual Network Peering.
:return: deserialized Azure Virtual Network Peering instance state dictionary
'''
self.log("Creating or Updating the Azure Virtual Network Peering {0}".format(self.name))
vnet_id = format_resource_id(self.virtual_network['name'],
self.subscription_id,
'Microsoft.Network',
'virtualNetworks',
self.virtual_network['resource_group'])
peering = self.network_models.VirtualNetworkPeering(
id=vnet_id,
name=self.name,
remote_virtual_network=self.network_models.SubResource(id=self.remote_virtual_network),
allow_virtual_network_access=self.allow_virtual_network_access,
allow_gateway_transit=self.allow_gateway_transit,
allow_forwarded_traffic=self.allow_forwarded_traffic,
use_remote_gateways=self.use_remote_gateways)
try:
response = self.network_client.virtual_network_peerings.create_or_update(self.resource_group,
self.virtual_network['name'],
self.name,
peering)
if isinstance(response, LROPoller):
response = self.get_poller_result(response)
return vnetpeering_to_dict(response)
except CloudError as exc:
self.fail("Error creating Azure Virtual Network Peering: {0}.".format(exc.message))
def delete_vnet_peering(self):
'''
Deletes the specified Azure Virtual Network Peering
:return: True
'''
self.log("Deleting Azure Virtual Network Peering {0}".format(self.name))
try:
poller = self.network_client.virtual_network_peerings.delete(
self.resource_group, self.virtual_network['name'], self.name)
self.get_poller_result(poller)
return True
except CloudError as e:
self.fail("Error deleting the Azure Virtual Network Peering: {0}".format(e.message))
return False
def get_vnet_peering(self):
'''
Gets the Virtual Network Peering.
:return: deserialized Virtual Network Peering
'''
self.log(
"Checking if Virtual Network Peering {0} is present".format(self.name))
try:
response = self.network_client.virtual_network_peerings.get(self.resource_group,
self.virtual_network['name'],
self.name)
self.log("Response : {0}".format(response))
return vnetpeering_to_dict(response)
except CloudError:
self.log('Did not find the Virtual Network Peering.')
return False
def main():
"""Main execution"""
AzureRMVirtualNetworkPeering()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,720 |
Linked Templates not updated with xml/json templates
|
##### SUMMARY
If you import a template from an xml/json file, the linked templates of a template (i.e. the child templates, not the root ones) aren't processed when you want to remove linked templates.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_templates
##### ANSIBLE VERSION
```shell
ansible 2.9.2
config file = /mnt/d/Code/gitlab/zabbix-servers/ansible/ansible.cfg
configured module search path = ['/home/fism/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /mnt/d/Code/gitlab/zabbix-servers/venv/lib/python3.6/site-packages/ansible
executable location = /mnt/d/Code/gitlab/zabbix-servers/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### OS / ENVIRONMENT
Zabbix 4.4.4 on RHEL7
##### STEPS TO REPRODUCE
Export a template (either with Zabbix itself or with Ansible). Change the corresponding linked-template section of the template and reimport it.
Exported with Zabbix:
```xml
<zabbix_export>
<version>4.4</version>
<date>2020-01-23T13:25:32Z</date>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
<templates>
<template>
<template>fisma</template>
<name>fisma</name>
<templates>
<template>
<name>fismb</name>
</template>
</templates>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
</template>
</templates>
</zabbix_export>
```
Changed version with all linked templates removed (fismb in the example above):
```xml
<zabbix_export>
<version>4.4</version>
<date>2020-01-23T13:25:32Z</date>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
<templates>
<template>
<template>fisma</template>
<name>fisma</name>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
</template>
</templates>
</zabbix_export>
```
##### EXPECTED RESULTS
fismb should be unlinked from fisma.
##### ACTUAL RESULTS
fismb is still linked with fisma.
|
https://github.com/ansible/ansible/issues/66720
|
https://github.com/ansible/ansible/pull/66747
|
99d7f150873011e7515851db9b44ff486efa9d77
|
055cf91d026c237ee71f30e22f4139313e4f5204
| 2020-01-23T13:41:07Z |
python
| 2020-01-27T14:20:45Z |
changelogs/fragments/66747-zabbix_template-newupdaterule-deletemissinglinkedtemplate.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,720 |
Linked Templates not updated with xml/json templates
|
##### SUMMARY
If you import a template from an xml/json file, the linked templates of a template (i.e. the child templates, not the root ones) aren't processed when you want to remove linked templates.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_templates
##### ANSIBLE VERSION
```shell
ansible 2.9.2
config file = /mnt/d/Code/gitlab/zabbix-servers/ansible/ansible.cfg
configured module search path = ['/home/fism/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /mnt/d/Code/gitlab/zabbix-servers/venv/lib/python3.6/site-packages/ansible
executable location = /mnt/d/Code/gitlab/zabbix-servers/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### OS / ENVIRONMENT
Zabbix 4.4.4 on RHEL7
##### STEPS TO REPRODUCE
Export a template (either with Zabbix itself or with Ansible). Change the corresponding linked-template section of the template and reimport it.
Exported with Zabbix:
```xml
<zabbix_export>
<version>4.4</version>
<date>2020-01-23T13:25:32Z</date>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
<templates>
<template>
<template>fisma</template>
<name>fisma</name>
<templates>
<template>
<name>fismb</name>
</template>
</templates>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
</template>
</templates>
</zabbix_export>
```
Changed version with all linked templates removed (fismb in the example above):
```xml
<zabbix_export>
<version>4.4</version>
<date>2020-01-23T13:25:32Z</date>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
<templates>
<template>
<template>fisma</template>
<name>fisma</name>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
</template>
</templates>
</zabbix_export>
```
##### EXPECTED RESULTS
fismb should be unlinked from fisma.
##### ACTUAL RESULTS
fismb is still linked with fisma.
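A fix would need the import/update path to compute which linked templates disappeared from the new definition and unlink them; the set difference itself is straightforward. A minimal sketch — `linked_templates_to_clear` is a hypothetical helper for illustration, not part of the module:

```python
def linked_templates_to_clear(existing_links, imported_links):
    """Linked templates present on the existing template but missing
    from the imported XML/JSON definition should be unlinked.

    Hypothetical helper; names are plain strings as they appear in the
    <templates>/<template>/<name> nodes of the export.
    """
    return sorted(set(existing_links) - set(imported_links))


# With the two exports above: fisma initially links fismb, while the
# re-imported definition links nothing.
print(linked_templates_to_clear(['fismb'], []))  # → ['fismb']
```

The result of this difference is what would have to be passed to the Zabbix API as the set of templates to unlink during the update.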
|
https://github.com/ansible/ansible/issues/66720
|
https://github.com/ansible/ansible/pull/66747
|
99d7f150873011e7515851db9b44ff486efa9d77
|
055cf91d026c237ee71f30e22f4139313e4f5204
| 2020-01-23T13:41:07Z |
python
| 2020-01-27T14:20:45Z |
lib/ansible/modules/monitoring/zabbix/zabbix_template.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2017, sookido
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: zabbix_template
short_description: Create/update/delete/dump Zabbix template
description:
- This module allows you to create, modify, delete and dump Zabbix templates.
- Multiple templates can be created or modified at once when passing JSON or XML to the module.
version_added: "2.5"
author:
- "sookido (@sookido)"
- "Logan Vig (@logan2211)"
- "Dusan Matejka (@D3DeFi)"
requirements:
- "python >= 2.6"
- "zabbix-api >= 0.5.4"
options:
template_name:
description:
- Name of Zabbix template.
- Required when I(template_json) or I(template_xml) are not used.
- Mutually exclusive with I(template_json) and I(template_xml).
required: false
type: str
template_json:
description:
- JSON dump of templates to import.
- Multiple templates can be imported this way.
- Mutually exclusive with I(template_name) and I(template_xml).
required: false
type: json
template_xml:
description:
- XML dump of templates to import.
- Multiple templates can be imported this way.
- You are advised to pass XML structure matching the structure used by your version of Zabbix server.
- Custom XML structure can be imported as long as it is valid, but may not yield consistent idempotent
results on subsequent runs.
- Mutually exclusive with I(template_name) and I(template_json).
required: false
version_added: '2.9'
type: str
template_groups:
description:
- List of host groups to add template to when template is created.
- Replaces the current host groups the template belongs to if the template is already present.
- Required when creating a new template with C(state=present) and I(template_name) is used.
Not required when updating an existing template.
required: false
type: list
elements: str
link_templates:
description:
- List of template names to be linked to the template.
- Templates that are not specified and are linked to the existing template will be only unlinked and not
cleared from the template.
required: false
type: list
elements: str
clear_templates:
description:
- List of template names to be unlinked and cleared from the template.
- This option is ignored if template is being created for the first time.
required: false
type: list
elements: str
macros:
description:
- List of user macros to create for the template.
- Macros that are not specified and are present on the existing template will be replaced.
- See examples on how to pass macros.
required: false
type: list
elements: dict
suboptions:
name:
description:
- Name of the macro.
- Must be specified in {$NAME} format.
type: str
value:
description:
- Value of the macro.
type: str
dump_format:
description:
- Format to use when dumping template with C(state=dump).
- This option is deprecated and will eventually be removed in 2.14.
required: false
choices: [json, xml]
default: "json"
version_added: '2.9'
type: str
state:
description:
- Required state of the template.
- On C(state=present) template will be created/imported or updated depending if it is already present.
- On C(state=dump) template content will get dumped into required format specified in I(dump_format).
- On C(state=absent) template will be deleted.
- The C(state=dump) is deprecated and will eventually be removed in 2.14. The M(zabbix_template_info) module should be used instead.
required: false
choices: [present, absent, dump]
default: "present"
type: str
extends_documentation_fragment:
- zabbix
'''
EXAMPLES = r'''
---
- name: Create a new Zabbix template linked to groups, macros and templates
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: ExampleHost
template_groups:
- Role
- Role2
link_templates:
- Example template1
- Example template2
macros:
- macro: '{$EXAMPLE_MACRO1}'
value: 30000
- macro: '{$EXAMPLE_MACRO2}'
value: 3
- macro: '{$EXAMPLE_MACRO3}'
value: 'Example'
state: present
- name: Unlink and clear templates from the existing Zabbix template
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: ExampleHost
clear_templates:
- Example template3
- Example template4
state: present
- name: Import Zabbix templates from JSON
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_json: "{{ lookup('file', 'zabbix_apache2.json') }}"
state: present
- name: Import Zabbix templates from XML
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_xml: "{{ lookup('file', 'zabbix_apache2.xml') }}"
state: present
- name: Import Zabbix template from Ansible dict variable
zabbix_template:
login_user: username
login_password: password
server_url: http://127.0.0.1
template_json:
zabbix_export:
version: '3.2'
templates:
- name: Template for Testing
description: 'Testing template import'
template: Test Template
groups:
- name: Templates
applications:
- name: Test Application
state: present
- name: Configure macros on the existing Zabbix template
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: Template
macros:
- macro: '{$TEST_MACRO}'
value: 'Example'
state: present
- name: Delete Zabbix template
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: Template
state: absent
- name: Dump Zabbix template as JSON
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: Template
state: dump
register: template_dump
- name: Dump Zabbix template as XML
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: Template
dump_format: xml
state: dump
register: template_dump
'''
RETURN = r'''
---
template_json:
description: The JSON dump of the template
returned: when state is dump
type: str
sample: {
"zabbix_export":{
"date":"2017-11-29T16:37:24Z",
"templates":[{
"templates":[],
"description":"",
"httptests":[],
"screens":[],
"applications":[],
"discovery_rules":[],
"groups":[{"name":"Templates"}],
"name":"Test Template",
"items":[],
"macros":[],
"template":"test"
}],
"version":"3.2",
"groups":[{
"name":"Templates"
}]
}
}
template_xml:
description: dump of the template in XML representation
returned: when state is dump and dump_format is xml
type: str
sample: |-
<?xml version="1.0" ?>
<zabbix_export>
<version>4.2</version>
<date>2019-07-12T13:37:26Z</date>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
<templates>
<template>
<template>test</template>
<name>Test Template</name>
<description/>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
<applications/>
<items/>
<discovery_rules/>
<httptests/>
<macros/>
<templates/>
<screens/>
<tags/>
</template>
</templates>
</zabbix_export>
'''
import atexit
import json
import traceback
import xml.etree.ElementTree as ET
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
try:
from zabbix_api import ZabbixAPI, ZabbixAPIException
HAS_ZABBIX_API = True
except ImportError:
ZBX_IMP_ERR = traceback.format_exc()
HAS_ZABBIX_API = False
class Template(object):
def __init__(self, module, zbx):
self._module = module
self._zapi = zbx
# check if host group exists
def check_host_group_exist(self, group_names):
for group_name in group_names:
result = self._zapi.hostgroup.get({'filter': {'name': group_name}})
if not result:
self._module.fail_json(msg="Hostgroup not found: %s" %
group_name)
return True
# get group ids by group names
def get_group_ids_by_group_names(self, group_names):
group_ids = []
if group_names is None or len(group_names) == 0:
return group_ids
if self.check_host_group_exist(group_names):
group_list = self._zapi.hostgroup.get(
{'output': 'extend',
'filter': {'name': group_names}})
for group in group_list:
group_id = group['groupid']
group_ids.append({'groupid': group_id})
return group_ids
def get_template_ids(self, template_list):
template_ids = []
if template_list is None or len(template_list) == 0:
return template_ids
for template in template_list:
template_list = self._zapi.template.get(
{'output': 'extend',
'filter': {'host': template}})
if len(template_list) < 1:
continue
else:
template_id = template_list[0]['templateid']
template_ids.append(template_id)
return template_ids
def add_template(self, template_name, group_ids, link_template_ids, macros):
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.template.create({'host': template_name, 'groups': group_ids, 'templates': link_template_ids,
'macros': macros})
def check_template_changed(self, template_ids, template_groups, link_templates, clear_templates,
template_macros, template_content, template_type):
"""Compares template parameters to already existing values if any are found.
template_json - JSON structures are compared as deep sorted dictionaries,
template_xml - XML structures are compared as strings, but filtered and formatted first,
If none above is used, all the other arguments are compared to their existing counterparts
retrieved from Zabbix API."""
changed = False
# Compare filtered and formatted XMLs strings for any changes. It is expected that provided
# XML has same structure as Zabbix uses (e.g. it was optimally exported via Zabbix GUI or API)
if template_content is not None and template_type == 'xml':
existing_template = self.dump_template(template_ids, template_type='xml')
if self.filter_xml_template(template_content) != self.filter_xml_template(existing_template):
changed = True
return changed
existing_template = self.dump_template(template_ids, template_type='json')
# Compare JSON objects as deep sorted python dictionaries
if template_content is not None and template_type == 'json':
parsed_template_json = self.load_json_template(template_content)
if self.diff_template(parsed_template_json, existing_template):
changed = True
return changed
# If neither template_json or template_xml were used, user provided all parameters via module options
if template_groups is not None:
existing_groups = [g['name'] for g in existing_template['zabbix_export']['groups']]
if set(template_groups) != set(existing_groups):
changed = True
if 'templates' not in existing_template['zabbix_export']['templates'][0]:
existing_template['zabbix_export']['templates'][0]['templates'] = []
# Check if any new templates would be linked or any existing would be unlinked
exist_child_templates = [t['name'] for t in existing_template['zabbix_export']['templates'][0]['templates']]
if link_templates is not None:
if set(link_templates) != set(exist_child_templates):
changed = True
else:
if set([]) != set(exist_child_templates):
changed = True
# Mark that there will be changes when at least one existing template will be unlinked
if clear_templates is not None:
for t in clear_templates:
if t in exist_child_templates:
changed = True
break
if template_macros is not None:
existing_macros = existing_template['zabbix_export']['templates'][0]['macros']
if template_macros != existing_macros:
changed = True
return changed
def update_template(self, template_ids, group_ids, link_template_ids, clear_template_ids, template_macros):
template_changes = {}
if group_ids is not None:
template_changes.update({'groups': group_ids})
if link_template_ids is not None:
template_changes.update({'templates': link_template_ids})
else:
template_changes.update({'templates': []})
if clear_template_ids is not None:
template_changes.update({'templates_clear': clear_template_ids})
if template_macros is not None:
template_changes.update({'macros': template_macros})
if template_changes:
# If we got here we know that only one template was provided via template_name
template_changes.update({'templateid': template_ids[0]})
self._zapi.template.update(template_changes)
def delete_template(self, templateids):
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.template.delete(templateids)
def ordered_json(self, obj):
# Deep sort json dicts for comparison
if isinstance(obj, dict):
return sorted((k, self.ordered_json(v)) for k, v in obj.items())
if isinstance(obj, list):
return sorted(self.ordered_json(x) for x in obj)
else:
return obj
def dump_template(self, template_ids, template_type='json'):
if self._module.check_mode:
self._module.exit_json(changed=True)
try:
dump = self._zapi.configuration.export({'format': template_type, 'options': {'templates': template_ids}})
if template_type == 'xml':
return str(ET.tostring(ET.fromstring(dump.encode('utf-8')), encoding='utf-8').decode('utf-8'))
else:
return self.load_json_template(dump)
except ZabbixAPIException as e:
self._module.fail_json(msg='Unable to export template: %s' % e)
def diff_template(self, template_json_a, template_json_b):
# Compare 2 zabbix templates and return True if they differ.
template_json_a = self.filter_template(template_json_a)
template_json_b = self.filter_template(template_json_b)
if self.ordered_json(template_json_a) == self.ordered_json(template_json_b):
return False
return True
def filter_template(self, template_json):
# Filter the template json to contain only the keys we will update
keep_keys = set(['graphs', 'templates', 'triggers', 'value_maps'])
unwanted_keys = set(template_json['zabbix_export']) - keep_keys
for unwanted_key in unwanted_keys:
del template_json['zabbix_export'][unwanted_key]
# Versions older than 2.4 do not support description field within template
desc_not_supported = False
if LooseVersion(self._zapi.api_version()).version[:2] < LooseVersion('2.4').version:
desc_not_supported = True
# Filter empty attributes from template object to allow accurate comparison
for template in template_json['zabbix_export']['templates']:
for key in list(template.keys()):
if not template[key] or (key == 'description' and desc_not_supported):
template.pop(key)
return template_json
def filter_xml_template(self, template_xml):
"""Filters out keys from XML template that may vary between exports (e.g. date or version) and
keys that are not imported via this module.
It is advised that the provided XML template exactly matches the XML structure used by Zabbix"""
# Strip last new line and convert string to ElementTree
parsed_xml_root = self.load_xml_template(template_xml.strip())
keep_keys = ['graphs', 'templates', 'triggers', 'value_maps']
# Remove unwanted XML nodes
for node in list(parsed_xml_root):
if node.tag not in keep_keys:
parsed_xml_root.remove(node)
# Filter empty attributes from template objects to allow accurate comparison
for template in list(parsed_xml_root.find('templates')):
for element in list(template):
if element.text is None and len(list(element)) == 0:
template.remove(element)
# Filter new lines and indentation
xml_root_text = list(line.strip() for line in ET.tostring(parsed_xml_root, encoding='utf8', method='xml').decode().split('\n'))
return ''.join(xml_root_text)
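# A stdlib illustration of the normalization performed above (editor's sketch,
# not part of the module): parse an XML document, drop child nodes outside a
# whitelist, then strip the indentation and newlines that vary between
# exports so two documents can be compared as strings.

```python
import xml.etree.ElementTree as ET

def normalize_xml(text, keep=('templates',)):
    # Keep only whitelisted child nodes, then serialize without the
    # whitespace that differs between pretty-printed and compact exports.
    root = ET.fromstring(text.strip())
    for node in list(root):
        if node.tag not in keep:
            root.remove(node)
    raw = ET.tostring(root, encoding='utf8', method='xml').decode()
    return ''.join(line.strip() for line in raw.split('\n'))

a = "<zabbix_export>\n  <date>2020-01-01</date>\n  <templates><t/></templates>\n</zabbix_export>"
b = "<zabbix_export><date>2019-12-31</date><templates><t/></templates></zabbix_export>"
print(normalize_xml(a) == normalize_xml(b))  # True: the varying <date> node is dropped
```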
def load_json_template(self, template_json):
try:
return json.loads(template_json)
except ValueError as e:
self._module.fail_json(msg='Invalid JSON provided', details=to_native(e), exception=traceback.format_exc())
def load_xml_template(self, template_xml):
try:
return ET.fromstring(template_xml)
except ET.ParseError as e:
self._module.fail_json(msg='Invalid XML provided', details=to_native(e), exception=traceback.format_exc())
def import_template(self, template_content, template_type='json'):
# rules schema latest version
update_rules = {
'applications': {
'createMissing': True,
'deleteMissing': True
},
'discoveryRules': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'graphs': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'groups': {
'createMissing': True
},
'httptests': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'items': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'templates': {
'createMissing': True,
'updateExisting': True
},
'templateLinkage': {
'createMissing': True
},
'templateScreens': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'triggers': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'valueMaps': {
'createMissing': True,
'updateExisting': True
}
}
try:
# old api version support here
api_version = self._zapi.api_version()
# updateExisting for application removed from zabbix api after 3.2
if LooseVersion(api_version).version[:2] <= LooseVersion('3.2').version:
update_rules['applications']['updateExisting'] = True
import_data = {'format': template_type, 'source': template_content, 'rules': update_rules}
self._zapi.configuration.import_(import_data)
except ZabbixAPIException as e:
self._module.fail_json(msg='Unable to import template', details=to_native(e),
exception=traceback.format_exc())
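# ``import_template`` above gates the ``updateExisting`` rule for applications
# on the server's API version. A minimal sketch of that version-gating pattern
# without the distutils dependency (editor's illustration; the version string
# is a hypothetical server response):

```python
def version_tuple(v):
    # "3.2.11" -> (3, 2): only the major.minor components are compared,
    # mirroring the LooseVersion(...).version[:2] slice used above.
    return tuple(int(part) for part in v.split('.')[:2])

rules = {'applications': {'createMissing': True, 'deleteMissing': True}}
api_version = "3.2.11"  # hypothetical server response
if version_tuple(api_version) <= version_tuple("3.2"):
    # updateExisting for applications was removed from the Zabbix API after 3.2
    rules['applications']['updateExisting'] = True
print(rules['applications'])  # now includes 'updateExisting': True
```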
def main():
module = AnsibleModule(
argument_spec=dict(
server_url=dict(type='str', required=True, aliases=['url']),
login_user=dict(type='str', required=True),
login_password=dict(type='str', required=True, no_log=True),
http_login_user=dict(type='str', required=False, default=None),
http_login_password=dict(type='str', required=False, default=None, no_log=True),
validate_certs=dict(type='bool', required=False, default=True),
template_name=dict(type='str', required=False),
template_json=dict(type='json', required=False),
template_xml=dict(type='str', required=False),
template_groups=dict(type='list', required=False),
link_templates=dict(type='list', required=False),
clear_templates=dict(type='list', required=False),
macros=dict(type='list', required=False),
dump_format=dict(type='str', required=False, default='json', choices=['json', 'xml']),
state=dict(type='str', default="present", choices=['present', 'absent', 'dump']),
timeout=dict(type='int', default=10)
),
required_one_of=[
['template_name', 'template_json', 'template_xml']
],
mutually_exclusive=[
['template_name', 'template_json', 'template_xml']
],
required_if=[
['state', 'absent', ['template_name']],
['state', 'dump', ['template_name']]
],
supports_check_mode=True
)
if not HAS_ZABBIX_API:
module.fail_json(msg=missing_required_lib('zabbix-api', url='https://pypi.org/project/zabbix-api/'), exception=ZBX_IMP_ERR)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
http_login_user = module.params['http_login_user']
http_login_password = module.params['http_login_password']
validate_certs = module.params['validate_certs']
template_name = module.params['template_name']
template_json = module.params['template_json']
template_xml = module.params['template_xml']
template_groups = module.params['template_groups']
link_templates = module.params['link_templates']
clear_templates = module.params['clear_templates']
template_macros = module.params['macros']
dump_format = module.params['dump_format']
state = module.params['state']
timeout = module.params['timeout']
zbx = None
try:
zbx = ZabbixAPI(server_url, timeout=timeout, user=http_login_user, passwd=http_login_password,
validate_certs=validate_certs)
zbx.login(login_user, login_password)
atexit.register(zbx.logout)
except ZabbixAPIException as e:
module.fail_json(msg="Failed to connect to Zabbix server: %s" % e)
template = Template(module, zbx)
# Identify template names for IDs retrieval
# Template names are expected to reside in ['zabbix_export']['templates'][*]['template'] for both data types
template_content, template_type = None, None
if template_json is not None:
template_type = 'json'
template_content = template_json
json_parsed = template.load_json_template(template_content)
template_names = list(t['template'] for t in json_parsed['zabbix_export']['templates'])
elif template_xml is not None:
template_type = 'xml'
template_content = template_xml
xml_parsed = template.load_xml_template(template_content)
template_names = list(t.find('template').text for t in list(xml_parsed.find('templates')))
else:
template_names = [template_name]
template_ids = template.get_template_ids(template_names)
if state == "absent":
if not template_ids:
module.exit_json(changed=False, msg="Template not found. No changes made: %s" % template_name)
template.delete_template(template_ids)
module.exit_json(changed=True, result="Successfully deleted template %s" % template_name)
elif state == "dump":
module.deprecate("The 'dump' state has been deprecated and will be removed, use 'zabbix_template_info' module instead.", version='2.14')
if not template_ids:
module.fail_json(msg='Template not found: %s' % template_name)
if dump_format == 'json':
module.exit_json(changed=False, template_json=template.dump_template(template_ids, template_type='json'))
elif dump_format == 'xml':
module.exit_json(changed=False, template_xml=template.dump_template(template_ids, template_type='xml'))
elif state == "present":
# Load all subelements for template that were provided by user
group_ids = None
if template_groups is not None:
group_ids = template.get_group_ids_by_group_names(template_groups)
link_template_ids = None
if link_templates is not None:
link_template_ids = template.get_template_ids(link_templates)
clear_template_ids = None
if clear_templates is not None:
clear_template_ids = template.get_template_ids(clear_templates)
if template_macros is not None:
# Zabbix configuration.export does not differentiate python types (numbers are returned as strings)
for macroitem in template_macros:
for key in macroitem:
macroitem[key] = str(macroitem[key])
if not template_ids:
# Assume new templates are being added when no ID's were found
if template_content is not None:
template.import_template(template_content, template_type)
module.exit_json(changed=True, result="Template import successful")
else:
if group_ids is None:
module.fail_json(msg='template_groups are required when creating a new Zabbix template')
template.add_template(template_name, group_ids, link_template_ids, template_macros)
module.exit_json(changed=True, result="Successfully added template: %s" % template_name)
else:
changed = template.check_template_changed(template_ids, template_groups, link_templates, clear_templates,
template_macros, template_content, template_type)
if module.check_mode:
module.exit_json(changed=changed)
if changed:
if template_type is not None:
template.import_template(template_content, template_type)
else:
template.update_template(template_ids, group_ids, link_template_ids, clear_template_ids,
template_macros)
module.exit_json(changed=changed, result="Template successfully updated")
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,464 |
custom lookup plugin found with ansible-doc but not found in playbook
|
##### SUMMARY
Custom lookup plugin appears correctly configured when checked with ```ansible-doc -t lookup listFolders```; however, using it in a playbook always results in ```FAILED! => {"msg": "lookup plugin (listFolders) not found"}```.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- Custom lookup plugin
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = /Users/xxxx/.ansible.cfg
configured module search path = ['/Users/xxxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.2_1/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.1 (default, Dec 27 2019, 18:05:45) [Clang 11.0.0 (clang-1100.0.33.16)]
```
##### CONFIGURATION
```
defaults only
```
##### OS / ENVIRONMENT
- macOS Mojave 10.14.6
- zsh 5.3 (x86_64-apple-darwin18.0)
##### STEPS TO REPRODUCE
Here is a trivial custom lookup plugin that just contains documentation to test my configuration.
It is placed in /Users/xxxx/.ansible/plugins/lookup/listFolders.py
```
DOCUMENTATION = """
lookup: listFolders
short_description: returns list from yaml descriptor
description:
- This lookup returns a list from the yaml descriptor
"""
```
This command correctly picks up the lookup plugin:
```ansible-doc -t lookup listFolders```
And shows the output:
```
> LISTFOLDERS (/Users/xxxx/.ansible/plugins/lookup/listFolders.py)
This lookup returns a list from the yaml descriptor
* This module is maintained by The Ansible Community
METADATA:
status:
- preview
supported_by: community
```
However in a playbook I always hit this error:
```fatal: [localhost]: FAILED! => {"msg": "lookup plugin (listFolders) not found"}```
Here is a sample playbook that produces the error:
```
---
- name: "Test custom lookup plugin"
hosts: localhost
tasks:
- name: lookup listFolders
debug:
msg: '{{ lookup("listFolders") }}'
```
##### EXPECTED RESULTS
```
TASK [lookup listFolders] ***************************************************************************************************************************************************************
ok: [localhost]
```
##### ACTUAL RESULTS
```
TASK [lookup listFolders] ***************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "lookup plugin (listFolders) not found"}
```
<!--- Paste verbatim command output between quotes -->
```
> ansible-playbook testplaybook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test custom lookup plugin] ************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************************
ok: [localhost]
TASK [lookup listFolders] ***************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "lookup plugin (listFolders) not found"}
PLAY RECAP ******************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
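The changelog fragment attached to the linked fix (``66464-lookup-case-sensitivity-fix.yml``) points at case handling of plugin names: ``ansible-doc`` resolves the mixed-case file directly, while the templating path evidently failed to match it. A pure-Python sketch of how that kind of registry mismatch can arise (hypothetical names and paths, not Ansible's actual internals):

```python
# Hypothetical plugin registry illustrating a case-sensitivity mismatch:
# files are registered under their exact on-disk names, but one resolution
# path normalizes the requested name to lowercase before searching.
registry = {"listFolders": "/plugins/lookup/listFolders.py"}

def find_exact(name):
    return registry.get(name)            # how a doc tool might resolve it

def find_lowercased(name):
    return registry.get(name.lower())    # a resolution path that folds case

print(find_exact("listFolders"))       # found: /plugins/lookup/listFolders.py
print(find_lowercased("listFolders"))  # None -> "lookup plugin not found"
```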
|
https://github.com/ansible/ansible/issues/66464
|
https://github.com/ansible/ansible/pull/66521
|
3ce644051543d5750b1eb03bf92556e5f243bea2
|
4ca0c7f11676f62ba5298abf2e506f1e953767da
| 2020-01-14T08:35:18Z |
python
| 2020-01-27T20:09:45Z |
changelogs/fragments/66464-lookup-case-sensitivity-fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,464 |
custom lookup plugin found with ansible-doc but not found in playbook
|
|
https://github.com/ansible/ansible/issues/66464
|
https://github.com/ansible/ansible/pull/66521
|
3ce644051543d5750b1eb03bf92556e5f243bea2
|
4ca0c7f11676f62ba5298abf2e506f1e953767da
| 2020-01-14T08:35:18Z |
python
| 2020-01-27T20:09:45Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
* Windows Server 2008 and 2008 R2 will no longer be supported or tested in the next Ansible release, see :ref:`windows_faq_server2008`.
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead.
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.
* :ref:`redfish_config <redfish_config_module>`: the ``bios_attribute_name`` and ``bios_attribute_value`` options will be removed. To maintain the existing behavior use the ``bios_attributes`` option instead.
* :ref:`clc_aa_policy <clc_aa_policy_module>`: the ``wait`` parameter will be removed. It has always been ignored by the module.
* :ref:`redfish_config <redfish_config_module>`, :ref:`redfish_command <redfish_command_module>`: the behavior to select the first System, Manager, or Chassis resource to modify when multiple are present will be removed. Use the new ``resource_id`` option to specify target resource to modify.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.
* :ref:`elb_network_lb <elb_network_lb_module>`: the default behaviour for the ``state`` option will change from ``absent`` to ``present``. To maintain the existing behavior explicitly set state to ``absent``.
* :ref:`vmware_tag_info <vmware_tag_info_module>`: the module will not return ``tag_facts`` since it does not return multiple tags with the same name and different category id. To maintain the existing behavior use ``tag_info`` which is a list of tag metadata.
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead.
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs all ``*.ps1`` files in the directory specified, due to the risk of executing potentially unknown scripts. It will follow Pester's own built-in default of only running tests for files named like ``*.tests.ps1``.
* :ref:`win_find <win_find_module>` has been refactored to better match the behaviour of the ``find`` module. Here is what has changed:
* When the directory specified by ``paths`` does not exist or is a file, it will no longer fail and will just warn the user
* Junction points are no longer reported as ``islnk``, use ``isjunction`` to properly report these files. This behaviour matches the :ref:`win_stat <win_stat_module>`
* Directories no longer return a ``size``, this matches the ``stat`` and ``find`` behaviour and has been removed due to the difficulties in correctly reporting the size of a directory
Plugins
=======
Noteworthy plugin changes
-------------------------
* The ``hashi_vault`` lookup plugin now returns the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret which required additional steps to extract and filter the desired version.
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,464 |
custom lookup plugin found with ansible-doc but not found in playbook
|
|
https://github.com/ansible/ansible/issues/66464
|
https://github.com/ansible/ansible/pull/66521
|
3ce644051543d5750b1eb03bf92556e5f243bea2
|
4ca0c7f11676f62ba5298abf2e506f1e953767da
| 2020-01-14T08:35:18Z |
python
| 2020-01-27T20:09:45Z |
lib/ansible/template/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import datetime
import os
import pkgutil
import pwd
import re
import time
from contextlib import contextmanager
from numbers import Number
try:
from hashlib import sha1
except ImportError:
from sha import sha as sha1
from jinja2.exceptions import TemplateSyntaxError, UndefinedError
from jinja2.loaders import FileSystemLoader
from jinja2.runtime import Context, StrictUndefined
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleFilterError, AnsibleUndefinedVariable, AnsibleAssertionError
from ansible.module_utils.six import iteritems, string_types, text_type
from ansible.module_utils._text import to_native, to_text, to_bytes
from ansible.module_utils.common._collections_compat import Sequence, Mapping, MutableMapping
from ansible.module_utils.common.collections import is_sequence
from ansible.plugins.loader import filter_loader, lookup_loader, test_loader
from ansible.template.safe_eval import safe_eval
from ansible.template.template import AnsibleJ2Template
from ansible.template.vars import AnsibleJ2Vars
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import wrap_var
# HACK: keep Python 2.6 controller tests happy in CI until they're properly split
try:
from importlib import import_module
except ImportError:
import_module = __import__
display = Display()
__all__ = ['Templar', 'generate_ansible_template_vars']
# A regex for checking to see if a variable we're trying to
# expand is just a single variable name.
# Primitive Types which we don't want Jinja to convert to strings.
NON_TEMPLATED_TYPES = (bool, Number)
JINJA2_OVERRIDE = '#jinja2:'
USE_JINJA2_NATIVE = False
if C.DEFAULT_JINJA2_NATIVE:
try:
from jinja2.nativetypes import NativeEnvironment as Environment
from ansible.template.native_helpers import ansible_native_concat as j2_concat
USE_JINJA2_NATIVE = True
except ImportError:
from jinja2 import Environment
from jinja2.utils import concat as j2_concat
from jinja2 import __version__ as j2_version
display.warning(
'jinja2_native requires Jinja 2.10 and above. '
'Version detected: %s. Falling back to default.' % j2_version
)
else:
from jinja2 import Environment
from jinja2.utils import concat as j2_concat
JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin'))
JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end'))
def generate_ansible_template_vars(path, dest_path=None):
b_path = to_bytes(path)
try:
template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name
except (KeyError, TypeError):
template_uid = os.stat(b_path).st_uid
temp_vars = {
'template_host': to_text(os.uname()[1]),
'template_path': path,
'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)),
'template_uid': to_text(template_uid),
'template_fullpath': os.path.abspath(path),
'template_run_date': datetime.datetime.now(),
'template_destpath': to_native(dest_path) if dest_path else None,
}
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=temp_vars['template_host'],
uid=temp_vars['template_uid'],
file=temp_vars['template_path'],
)
temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path))))
return temp_vars
def _escape_backslashes(data, jinja_env):
"""Double backslashes within jinja2 expressions
A user may enter something like this in a playbook::
debug:
msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}"
The string inside of the {{ gets interpreted multiple times: first by yaml,
then by python, and finally by jinja2 as part of its variable. Because
it is processed by both python and jinja2, the backslash-escaped
characters get unescaped twice. This means that we'd normally have to use
four backslashes to escape that. This is painful for playbook authors as
they have to remember different rules for inside vs outside of a jinja2
expression (The backslashes outside of the "{{ }}" only get processed by
yaml and python. So they only need to be escaped once). The following
code fixes this by automatically performing the extra quoting of
backslashes inside of a jinja2 expression.
"""
if '\\' in data and '{{' in data:
new_data = []
d2 = jinja_env.preprocess(data)
in_var = False
for token in jinja_env.lex(d2):
if token[1] == 'variable_begin':
in_var = True
new_data.append(token[2])
elif token[1] == 'variable_end':
in_var = False
new_data.append(token[2])
elif in_var and token[1] == 'string':
# Double backslashes only if we're inside of a jinja2 variable
new_data.append(token[2].replace('\\', '\\\\'))
else:
new_data.append(token[2])
data = ''.join(new_data)
return data
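# Illustrative example (editor-added; ``env`` stands for any configured
# AnsibleEnvironment): backslashes inside a Jinja2 string token are doubled,
# while backslashes outside "{{ }}" are left untouched. Roughly:
#   _escape_backslashes(r"msg {{ 'a\1' | string }}", env)
# returns the same text with the inner backslash doubled:
#   r"msg {{ 'a\\1' | string }}"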
def is_template(data, jinja_env):
"""This function attempts to quickly detect whether a value is a jinja2
template. To do so, we look for the first 2 matching jinja2 tokens for
start and end delimiters.
"""
found = None
start = True
comment = False
d2 = jinja_env.preprocess(data)
    # This wraps a lot of code, but this is due to lex returning a generator,
    # so we may get an exception at any point in the loop
try:
for token in jinja_env.lex(d2):
if token[1] in JINJA2_BEGIN_TOKENS:
if start and token[1] == 'comment_begin':
# Comments can wrap other token types
comment = True
start = False
# Example: variable_end -> variable
found = token[1].split('_')[0]
elif token[1] in JINJA2_END_TOKENS:
if token[1].split('_')[0] == found:
return True
elif comment:
continue
return False
except TemplateSyntaxError:
return False
return False
def _count_newlines_from_end(in_str):
'''
Counts the number of newlines at the end of a string. This is used during
the jinja2 templating to ensure the count matches the input, since some newlines
may be thrown away during the templating.
'''
try:
i = len(in_str)
j = i - 1
while in_str[j] == '\n':
j -= 1
return i - 1 - j
except IndexError:
# Uncommon cases: zero length string and string containing only newlines
return i
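# Illustrative behaviour of _count_newlines_from_end (editor-added sketch):
#   _count_newlines_from_end('hello\n\n')  == 2
#   _count_newlines_from_end('\n\n\n')     == 3   # all-newline string takes the IndexError path
#   _count_newlines_from_end('')           == 0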
def recursive_check_defined(item):
from jinja2.runtime import Undefined
if isinstance(item, MutableMapping):
for key in item:
recursive_check_defined(item[key])
elif isinstance(item, list):
for i in item:
recursive_check_defined(i)
else:
if isinstance(item, Undefined):
raise AnsibleFilterError("{0} is undefined".format(item))
class AnsibleUndefined(StrictUndefined):
'''
A custom Undefined class, which returns further Undefined objects on access,
rather than throwing an exception.
'''
def __getattr__(self, name):
if name == '__UNSAFE__':
# AnsibleUndefined should never be assumed to be unsafe
# This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True``
raise AttributeError(name)
# Return original Undefined object to preserve the first failure context
return self
def __getitem__(self, key):
# Return original Undefined object to preserve the first failure context
return self
def __repr__(self):
return 'AnsibleUndefined'
class AnsibleContext(Context):
'''
A custom context, which intercepts resolve() calls and sets a flag
internally if any variable lookup returns an AnsibleUnsafe value. This
flag is checked post-templating, and (when set) will result in the
final templated result being wrapped in AnsibleUnsafe.
'''
def __init__(self, *args, **kwargs):
super(AnsibleContext, self).__init__(*args, **kwargs)
self.unsafe = False
def _is_unsafe(self, val):
'''
Our helper function, which will also recursively check dict and
list entries due to the fact that they may be repr'd and contain
a key or value which contains jinja2 syntax and would otherwise
lose the AnsibleUnsafe value.
'''
if isinstance(val, dict):
for key in val.keys():
if self._is_unsafe(val[key]):
return True
elif isinstance(val, list):
for item in val:
if self._is_unsafe(item):
return True
elif getattr(val, '__UNSAFE__', False) is True:
return True
return False
def _update_unsafe(self, val):
if val is not None and not self.unsafe and self._is_unsafe(val):
self.unsafe = True
def resolve(self, key):
'''
The intercepted resolve(), which uses the helper above to set the
internal flag whenever an unsafe variable value is returned.
'''
val = super(AnsibleContext, self).resolve(key)
self._update_unsafe(val)
return val
def resolve_or_missing(self, key):
val = super(AnsibleContext, self).resolve_or_missing(key)
self._update_unsafe(val)
return val
class JinjaPluginIntercept(MutableMapping):
def __init__(self, delegatee, pluginloader, *args, **kwargs):
super(JinjaPluginIntercept, self).__init__(*args, **kwargs)
self._delegatee = delegatee
self._pluginloader = pluginloader
if self._pluginloader.class_name == 'FilterModule':
self._method_map_name = 'filters'
self._dirname = 'filter'
elif self._pluginloader.class_name == 'TestModule':
self._method_map_name = 'tests'
self._dirname = 'test'
self._collection_jinja_func_cache = {}
        # FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's
        # implementations aren't supposed to change during a run
def __getitem__(self, key):
if not isinstance(key, string_types):
raise ValueError('key must be a string')
key = to_native(key)
if '.' not in key: # might be a built-in value, delegate to base dict
return self._delegatee.__getitem__(key)
func = self._collection_jinja_func_cache.get(key)
if func:
return func
acr = AnsibleCollectionRef.try_parse_fqcr(key, self._dirname)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
# FIXME: error handling for bogus plugin name, bogus impl, bogus filter/test
pkg = import_module(acr.n_python_package_name)
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
plugin_impl = self._pluginloader.get(module_name)
method_map = getattr(plugin_impl, self._method_map_name)
for f in iteritems(method_map()):
fq_name = '.'.join((parent_prefix, f[0]))
# FIXME: detect/warn on intra-collection function name collisions
self._collection_jinja_func_cache[fq_name] = f[1]
function_impl = self._collection_jinja_func_cache[key]
return function_impl
def __setitem__(self, key, value):
return self._delegatee.__setitem__(key, value)
def __delitem__(self, key):
raise NotImplementedError()
def __iter__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return iter(self._delegatee)
def __len__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return len(self._delegatee)
class AnsibleEnvironment(Environment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
def __init__(self, *args, **kwargs):
super(AnsibleEnvironment, self).__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader)
self.tests = JinjaPluginIntercept(self.tests, test_loader)
class Templar:
'''
The main class for templating, with the main entry-point of template().
'''
def __init__(self, loader, shared_loader_obj=None, variables=None):
variables = {} if variables is None else variables
self._loader = loader
self._filters = None
self._tests = None
self._available_variables = variables
self._cached_result = {}
if loader:
self._basedir = loader.get_basedir()
else:
self._basedir = './'
if shared_loader_obj:
self._filter_loader = getattr(shared_loader_obj, 'filter_loader')
self._test_loader = getattr(shared_loader_obj, 'test_loader')
self._lookup_loader = getattr(shared_loader_obj, 'lookup_loader')
else:
self._filter_loader = filter_loader
self._test_loader = test_loader
self._lookup_loader = lookup_loader
# flags to determine whether certain failures during templating
# should result in fatal errors being raised
self._fail_on_lookup_errors = True
self._fail_on_filter_errors = True
self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
self.environment = AnsibleEnvironment(
trim_blocks=True,
undefined=AnsibleUndefined,
extensions=self._get_extensions(),
finalize=self._finalize,
loader=FileSystemLoader(self._basedir),
)
# the current rendering context under which the templar class is working
self.cur_context = None
self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string))
self._clean_regex = re.compile(r'(?:%s|%s|%s|%s)' % (
self.environment.variable_start_string,
self.environment.block_start_string,
self.environment.block_end_string,
self.environment.variable_end_string
))
self._no_type_regex = re.compile(r'.*?\|\s*(?:%s)(?:\([^\|]*\))?\s*\)?\s*(?:%s)' %
('|'.join(C.STRING_TYPE_FILTERS), self.environment.variable_end_string))
def _get_filters(self):
'''
Returns filter plugins, after loading and caching them if need be
'''
if self._filters is not None:
return self._filters.copy()
self._filters = dict()
for fp in self._filter_loader.all():
self._filters.update(fp.filters())
return self._filters.copy()
def _get_tests(self):
'''
Returns tests plugins, after loading and caching them if need be
'''
if self._tests is not None:
return self._tests.copy()
self._tests = dict()
for fp in self._test_loader.all():
self._tests.update(fp.tests())
return self._tests.copy()
def _get_extensions(self):
'''
Return jinja2 extensions to load.
If some extensions are set via jinja_extensions in ansible.cfg, we try
to load them with the jinja environment.
'''
jinja_exts = []
if C.DEFAULT_JINJA2_EXTENSIONS:
# make sure the configuration directive doesn't contain spaces
# and split extensions in an array
jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',')
return jinja_exts
@property
def available_variables(self):
return self._available_variables
@available_variables.setter
def available_variables(self, variables):
'''
Sets the list of template variables this Templar instance will use
to template things, so we don't have to pass them around between
internal methods. We also clear the template cache here, as the variables
are being changed.
'''
if not isinstance(variables, Mapping):
raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables)))
self._available_variables = variables
self._cached_result = {}
def set_available_variables(self, variables):
display.deprecated(
'set_available_variables is being deprecated. Use "@available_variables.setter" instead.',
version='2.13'
)
self.available_variables = variables
@contextmanager
def set_temporary_context(self, **kwargs):
"""Context manager used to set temporary templating context, without having to worry about resetting
original values afterward
Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to
set context on another object, it must be in ``mapping``.
"""
mapping = {
'available_variables': self,
'searchpath': self.environment.loader,
}
original = {}
for key, value in kwargs.items():
obj = mapping.get(key, self.environment)
try:
original[key] = getattr(obj, key)
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs, lstrip_blocks was added in jinja2==2.7
pass
yield
for key in original:
obj = mapping.get(key, self.environment)
setattr(obj, key, original[key])
def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None,
convert_data=True, static_vars=None, cache=True, disable_lookups=False):
'''
Templates (possibly recursively) any given data as input. If convert_bare is
set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}')
before being sent through the template engine.
'''
static_vars = [''] if static_vars is None else static_vars
# Don't template unsafe variables, just return them.
if hasattr(variable, '__UNSAFE__'):
return variable
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
try:
if convert_bare:
variable = self._convert_bare_variable(variable)
if isinstance(variable, string_types):
result = variable
if self.is_possibly_template(variable):
# Check to see if the string we are trying to render is just referencing a single
# var. In this case we don't want to accidentally change the type of the variable
# to a string by using the jinja template renderer. We just want to pass it.
only_one = self.SINGLE_VAR.match(variable)
if only_one:
var_name = only_one.group(1)
if var_name in self._available_variables:
resolved_val = self._available_variables[var_name]
if isinstance(resolved_val, NON_TEMPLATED_TYPES):
return resolved_val
elif resolved_val is None:
return C.DEFAULT_NULL_REPRESENTATION
# Using a cache in order to prevent template calls with already templated variables
sha1_hash = None
if cache:
variable_hash = sha1(text_type(variable).encode('utf-8'))
options_hash = sha1(
(
text_type(preserve_trailing_newlines) +
text_type(escape_backslashes) +
text_type(fail_on_undefined) +
text_type(overrides)
).encode('utf-8')
)
sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest()
if cache and sha1_hash in self._cached_result:
result = self._cached_result[sha1_hash]
else:
result = self.do_template(
variable,
preserve_trailing_newlines=preserve_trailing_newlines,
escape_backslashes=escape_backslashes,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
if not USE_JINJA2_NATIVE:
unsafe = hasattr(result, '__UNSAFE__')
if convert_data and not self._no_type_regex.match(variable):
# if this looks like a dictionary or list, convert it to such using the safe_eval method
if (result.startswith("{") and not result.startswith(self.environment.variable_start_string)) or \
result.startswith("[") or result in ("True", "False"):
eval_results = safe_eval(result, include_exceptions=True)
if eval_results[1] is None:
result = eval_results[0]
if unsafe:
result = wrap_var(result)
else:
# FIXME: if the safe_eval raised an error, should we do something with it?
pass
# we only cache in the case where we have a single variable
# name, to make sure we're not putting things which may otherwise
# be dynamic in the cache (filters, lookups, etc.)
if cache:
self._cached_result[sha1_hash] = result
return result
elif is_sequence(variable):
return [self.template(
v,
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
) for v in variable]
elif isinstance(variable, Mapping):
d = {}
# we don't use iteritems() here to avoid problems if the underlying dict
# changes sizes due to the templating, which can happen with hostvars
for k in variable.keys():
if k not in static_vars:
d[k] = self.template(
variable[k],
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
else:
d[k] = variable[k]
return d
else:
return variable
except AnsibleFilterError:
if self._fail_on_filter_errors:
raise
else:
return variable
def is_template(self, data):
'''lets us know if data has a template'''
if isinstance(data, string_types):
return is_template(data, self.environment)
elif isinstance(data, (list, tuple)):
for v in data:
if self.is_template(v):
return True
elif isinstance(data, dict):
for k in data:
if self.is_template(k) or self.is_template(data[k]):
return True
return False
templatable = is_template
def is_possibly_template(self, data):
'''Determines if a string looks like a template, by seeing if it
contains a jinja2 start delimiter. Does not guarantee that the string
is actually a template.
This is different than ``is_template`` which is more strict.
This method may return ``True`` on a string that is not templatable.
Useful when guarding passing a string for templating, but when
you want to allow the templating engine to make the final
assessment which may result in ``TemplateSyntaxError``.
'''
env = self.environment
if isinstance(data, string_types):
for marker in (env.block_start_string, env.variable_start_string, env.comment_start_string):
if marker in data:
return True
return False
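    # Illustrative contrast (editor-added): a string such as "{{ broken"
    # contains a start marker, so is_possibly_template() returns True even
    # though is_template() would return False once the lexer hits the
    # unterminated expression.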
def _convert_bare_variable(self, variable):
'''
Wraps a bare string, which may have an attribute portion (ie. foo.bar)
in jinja2 variable braces so that it is evaluated properly.
'''
if isinstance(variable, string_types):
contains_filters = "|" in variable
first_part = variable.split("|")[0].split(".")[0].split("[")[0]
if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable:
return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string)
# the variable didn't meet the conditions to be converted,
# so just return it as-is
return variable
def _finalize(self, thing):
'''
        A custom finalize method for jinja2, which prevents None from being returned. This
        avoids rendering the literal string ``"None"``, as ``None`` has no meaning in YAML.
If using ANSIBLE_JINJA2_NATIVE we bypass this and return the actual value always
'''
if USE_JINJA2_NATIVE:
return thing
return thing if thing is not None else ''
def _fail_lookup(self, name, *args, **kwargs):
raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name)
def _now_datetime(self, utc=False, fmt=None):
'''jinja2 global function to return current datetime, potentially formatted via strftime'''
if utc:
now = datetime.datetime.utcnow()
else:
now = datetime.datetime.now()
if fmt:
return now.strftime(fmt)
return now
def _query_lookup(self, name, *args, **kwargs):
''' wrapper for lookup, force wantlist true'''
kwargs['wantlist'] = True
return self._lookup(name, *args, **kwargs)
def _lookup(self, name, *args, **kwargs):
instance = self._lookup_loader.get(name.lower(), loader=self._loader, templar=self)
if instance is not None:
wantlist = kwargs.pop('wantlist', False)
allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS)
errors = kwargs.pop('errors', 'strict')
from ansible.utils.listify import listify_lookup_plugin_terms
loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, loader=self._loader, fail_on_undefined=True, convert_bare=False)
# safely catch run failures per #5059
try:
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
except (AnsibleUndefinedVariable, UndefinedError) as e:
raise AnsibleUndefinedVariable(e)
except Exception as e:
if self._fail_on_lookup_errors:
msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \
(name, type(e), to_text(e))
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
raise AnsibleError(to_native(msg))
ran = [] if wantlist else None
if ran and not allow_unsafe:
if wantlist:
ran = wrap_var(ran)
else:
try:
ran = wrap_var(",".join(ran))
except TypeError:
# Lookup Plugins should always return lists. Throw an error if that's not
# the case:
if not isinstance(ran, Sequence):
raise AnsibleError("The lookup plugin '%s' did not return a list."
% name)
# The TypeError we can recover from is when the value *inside* of the list
# is not a string
if len(ran) == 1:
ran = wrap_var(ran[0])
else:
ran = wrap_var(ran)
if self.cur_context:
self.cur_context.unsafe = True
return ran
else:
raise AnsibleError("lookup plugin (%s) not found" % name)
def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False):
if USE_JINJA2_NATIVE and not isinstance(data, string_types):
return data
# For preserving the number of input newlines in the output (used
# later in this method)
data_newlines = _count_newlines_from_end(data)
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
try:
# allows template header overrides to change jinja2 options.
if overrides is None:
myenv = self.environment.overlay()
else:
myenv = self.environment.overlay(overrides)
# Get jinja env overrides from template
if hasattr(data, 'startswith') and data.startswith(JINJA2_OVERRIDE):
eol = data.find('\n')
line = data[len(JINJA2_OVERRIDE):eol]
data = data[eol + 1:]
for pair in line.split(','):
(key, val) = pair.split(':')
key = key.strip()
setattr(myenv, key, ast.literal_eval(val.strip()))
# Adds Ansible custom filters and tests
myenv.filters.update(self._get_filters())
myenv.tests.update(self._get_tests())
if escape_backslashes:
# Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\".
data = _escape_backslashes(data, myenv)
try:
t = myenv.from_string(data)
except TemplateSyntaxError as e:
raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data)))
except Exception as e:
if 'recursion' in to_native(e):
raise AnsibleError("recursive loop detected in template string: %s" % to_native(data))
else:
return data
# jinja2 global is inconsistent across versions, this normalizes them
t.globals['dict'] = dict
if disable_lookups:
t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup
else:
t.globals['lookup'] = self._lookup
t.globals['query'] = t.globals['q'] = self._query_lookup
t.globals['now'] = self._now_datetime
t.globals['finalize'] = self._finalize
jvars = AnsibleJ2Vars(self, t.globals)
self.cur_context = new_context = t.new_context(jvars, shared=True)
rf = t.root_render_func(new_context)
try:
res = j2_concat(rf)
if getattr(new_context, 'unsafe', False):
res = wrap_var(res)
except TypeError as te:
if 'AnsibleUndefined' in to_native(te):
errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data)
errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te)
raise AnsibleUndefinedVariable(errmsg)
else:
display.debug("failing because of a type error, template data is: %s" % to_text(data))
raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te)))
if USE_JINJA2_NATIVE and not isinstance(res, string_types):
return res
if preserve_trailing_newlines:
                # The low-level calls above do not preserve the newline
                # characters at the end of the input data, so we calculate
                # the difference in newlines and append them
                # to the resulting output for parity
#
# jinja2 added a keep_trailing_newline option in 2.7 when
# creating an Environment. That would let us make this code
# better (remove a single newline if
# preserve_trailing_newlines is False). Once we can depend on
# that version being present, modify our code to set that when
# initializing self.environment and remove a single trailing
# newline here if preserve_newlines is False.
res_newlines = _count_newlines_from_end(res)
if data_newlines > res_newlines:
res += self.environment.newline_sequence * (data_newlines - res_newlines)
return res
except (UndefinedError, AnsibleUndefinedVariable) as e:
if fail_on_undefined:
raise AnsibleUndefinedVariable(e)
else:
display.debug("Ignoring undefined failure: %s" % to_text(e))
return data
# for backwards compatibility in case anyone is using old private method directly
_do_template = do_template
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,045 |
vmware_host_vmhba_facts and vmware_host_vmhba_info truncate WWN of FC HBA
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The vmware_host_vmhba_facts module returns the decimal-formatted WWN/WWPN of the Fibre Channel HBA, but truncates the last digit, making it impossible to convert to hex or determine the real HBA WWPN for zoning/mapping. Even the help text of the module on the https://docs.ansible.com/ansible/latest/modules/vmware_host_vmhba_facts_module.html site displays the truncated value, which is useless.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_host_vmhba_facts and vmware_host_vmhba_info modules
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible --version
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
#ansible-config dump --only-changed
#
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
VMware ESXi, 6.7.0
VMWare ESXi, 6.5.0
QLogic 57840 10/20 Gigabit Ethernet Adapter "driver": "qfle3f",
CentOS7
ansible 2.8
ansible 2.10
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Gather HBA data from ESX servers using the vmware_host_vmhba_info or vmware_host_vmhba_facts modules. The port_wwn and node_wwn values are returned in decimal format with colon separation and truncated by one character. For example, 11:53:10:06:16:15:36:94:72 should be 11:53:10:06:16:15:36:94:725. The colons are not needed since this is a decimal value, which may also be a module problem. The values are even truncated in the example help text of the module (576496321590239521), showing this has been a bug since the module's inception. The same bug is present in the newest version of the module in ansible 2.10, currently in development.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Gather vmware host HBA facts from vCenter
vmware_host_vmhba_info:
cluster_name: '{{ cluster }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
hostname: '{{ vcenter_hostname }}'
validate_certs: no
register: host_facts
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
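Once the full decimal value is available, converting it to the familiar hex WWN notation is straightforward. The sketch below is editor-added for illustration (the helper name `decimal_wwn_to_hex` is hypothetical, not part of the module); the sample input is the reporter's corrected, untruncated value:

```python
def decimal_wwn_to_hex(decimal_wwn: str) -> str:
    """Render a decimal WWN/WWPN as colon-separated hex octets."""
    value = int(decimal_wwn.replace(":", ""))  # drop any cosmetic colons
    raw = format(value, "016x")                # 8 bytes -> 16 hex digits
    return ":".join(raw[i:i + 2] for i in range(0, 16, 2))

print(decimal_wwn_to_hex("1153100616153694725"))  # -> 10:00:a2:e6:a8:20:02:05
```

With the truncated value (18 digits instead of 19), the same conversion yields a meaningless WWN, which is why the missing digit matters.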
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expected the decimal number representation of the WWN returned by VMware to be returned in full so that it could be converted to hex and to the actual WWPN/WWN for zoning/mapping/etc. The port_wwn value should return enough digits to display the Fibre Channel WWN in full.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The decimal-formatted number is returned with a truncated last digit, making the value invalid; it cannot be converted to hex format.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Gather vmware host HBA facts from vCenter] *************************************************************************************************************************************************************************************
task path: /root/hbagather.yml:24
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052 `" && echo ansible-tmp-1570019424.92-77457880685052="` echo /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/vmware/vmware_host_vmhba_info.py
<localhost> PUT /root/.ansible/tmp/ansible-local-776496XYTxD/tmpPFZa23 TO /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052/AnsiballZ_vmware_host_vmhba_info.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052/ /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052/AnsiballZ_vmware_host_vmhba_info.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052/AnsiballZ_vmware_host_vmhba_info.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052/ > /dev/null 2>&1 && sleep 0'
ok: [localhost -> localhost] => {
"changed": false,
"hosts_vmhbas_info": {
"hostname.example.com": {
"vmhba_details": [
{
"adapter": "HPE Smart Array P220i",
"bus": 34,
"device": "vmhba0",
"driver": "nhpsa",
"location": "0000:22:00.0",
"model": "Smart Array P220i",
"node_wwn": "50:01:43:80:32:53:90:a0",
"status": "unknown",
"type": "SAS"
},
{
"adapter": "Broadcom Corporation QLogic 57840 10/20 Gigabit Ethernet Adapter",
"bus": 2,
"device": "vmhba32",
"driver": "bnx2fc",
"location": "0000:02:00.2",
"model": "QLogic 57840 10/20 Gigabit Ethernet Adapter",
"node_wwn": "11:53:10:06:16:15:36:94:72",
"port_type": "unknown",
"port_wwn": "11:53:10:06:16:15:36:94:72",
"speed": 0,
"status": "online",
"type": "FibreChannelOverEthernetHba"
},
{
"adapter": "Broadcom Corporation QLogic 57840 10/20 Gigabit Ethernet Adapter",
"bus": 2,
"device": "vmhba33",
"driver": "bnx2fc",
"location": "0000:02:00.3",
"model": "QLogic 57840 10/20 Gigabit Ethernet Adapter",
"node_wwn": "11:53:04:10:85:47:50:56:04",
"port_type": "unknown",
"port_wwn": "23:05:96:25:90:08:19:03:01",
"speed": 0,
"status": "unknown",
"type": "FibreChannelOverEthernetHba"
},
{
"adapter": "Broadcom Corporation QLogic 57840 10/20 Gigabit Ethernet Adapter",
"bus": 33,
"device": "vmhba34",
"driver": "bnx2fc",
"location": "0000:21:00.2",
"model": "QLogic 57840 10/20 Gigabit Ethernet Adapter",
"node_wwn": "11:53:04:10:85:47:50:57:39",
"port_type": "unknown",
"port_wwn": "23:05:96:25:90:08:19:04:36",
"speed": 0,
"status": "unknown",
"type": "FibreChannelOverEthernetHba"
},
{
"adapter": "Broadcom Corporation QLogic 57840 10/20 Gigabit Ethernet Adapter",
"bus": 33,
"device": "vmhba35",
"driver": "bnx2fc",
"location": "0000:21:00.3",
"model": "QLogic 57840 10/20 Gigabit Ethernet Adapter",
"node_wwn": "11:53:10:06:16:15:36:94:72",
"port_type": "unknown",
"port_wwn": "11:53:10:06:16:15:36:94:72",
"speed": 0,
"status": "online",
"type": "FibreChannelOverEthernetHba"
}
]
},
```
|
https://github.com/ansible/ansible/issues/63045
|
https://github.com/ansible/ansible/pull/66692
|
5c1fe78685713707af6351838effe5912990981c
|
65aedc5d4aee95a6a7983f208671f6c614b46e3c
| 2019-10-02T13:00:44Z |
python
| 2020-01-28T04:16:29Z |
changelogs/fragments/66692-vmware_host_vmhba_info_fix_63045.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,045 |
vmware_host_vmhba_facts and vmware_host_vmhba_info truncate WWN of FC HBA
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The vmware_host_vmhba_facts module returns the decimal-formatted WWN/WWPN of the Fibre Channel HBA, but truncates the last digit, making it impossible to convert the value to hex or determine the real HBA WWPN for zoning/mapping. Even the help text of the module at https://docs.ansible.com/ansible/latest/modules/vmware_host_vmhba_facts_module.html displays the truncated value, which is useless.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_host_vmhba_facts and vmware_host_vmhba_info modules
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible --version
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
#ansible-config dump --only-changed
#
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
VMware ESXi, 6.7.0
VMWare ESXi, 6.5.0
QLogic 57840 10/20 Gigabit Ethernet Adapter "driver": "qfle3f",
CentOS7
ansible 2.8
ansible 2.10
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Gather HBA data from ESXi servers using the vmware_host_vmhba_info or vmware_host_vmhba_facts modules. The port_wwn and node_wwn values are returned in decimal format with colon separation and truncated by one character. For example, 11:53:10:06:16:15:36:94:72 should be 11:53:10:06:16:15:36:94:725. The colons are not needed since this is a decimal value, which may also be a module problem. The values are even truncated in the example help text of the module (576496321590239521), showing this hasn't been tested properly since the module's creation and has been a bug since inception. The same bug is present in the newest version of the module in ansible 2.10, currently in development.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Gather vmware host HBA facts from vCenter
vmware_host_vmhba_info:
cluster_name: '{{ cluster }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
hostname: '{{ vcenter_hostname }}'
validate_certs: no
register: host_facts
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
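The truncation is easy to reproduce in isolation. Below is a minimal sketch (illustrative helper names, not the module's actual code) of the digit-pairing pattern that drops the last digit of any odd-length value, alongside a slicing variant that keeps it:

```python
# Illustrative reproduction (not the module's code): pairing digits with zip()
# drops the final digit of an odd-length string, because zip() stops at the
# shorter of the two slices.
def pairwise_join_buggy(digits):
    return ':'.join(a + b for a, b in zip(digits[::2], digits[1::2]))

def pairwise_join_fixed(digits):
    # Slicing in steps of two keeps a trailing odd digit.
    return ':'.join(digits[i:i + 2] for i in range(0, len(digits), 2))

full_wwn = '1153100616153694725'          # 19 digits -- odd length
print(pairwise_join_buggy(full_wwn))      # 11:53:10:06:16:15:36:94:72  (the '5' is lost)
print(pairwise_join_fixed(full_wwn))      # 11:53:10:06:16:15:36:94:72:5
```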
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expected the decimal representation of the WWN returned by VMware to be returned in full so that it could be converted to hex and to the actual WWPN/WWN for zoning/mapping/etc. The port_wwn value should contain enough digits to display the Fibre Channel WWN in full.
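For context, converting a full (untruncated) decimal WWN to the usual hex notation can be done along these lines — a hypothetical helper for illustration, not part of the module:

```python
def wwn_dec_to_hex(dec_str):
    """Convert a decimal-formatted WWN (optionally colon-separated)
    to the usual 16-hex-digit, colon-separated WWN notation."""
    value = int(dec_str.replace(':', ''))
    hex_str = format(value, '016x')   # WWNs are 64-bit values
    return ':'.join(hex_str[i:i + 2] for i in range(0, 16, 2))

# This only yields the real WWPN if no digit was dropped from the decimal form:
print(wwn_dec_to_hex('1153100616153694725'))
```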
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The decimal-formatted number is returned with its last digit truncated, making the value invalid; it cannot be converted to hex format.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Gather vmware host HBA facts from vCenter] *************************************************************************************************************************************************************************************
task path: /root/hbagather.yml:24
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052 `" && echo ansible-tmp-1570019424.92-77457880685052="` echo /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/vmware/vmware_host_vmhba_info.py
<localhost> PUT /root/.ansible/tmp/ansible-local-776496XYTxD/tmpPFZa23 TO /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052/AnsiballZ_vmware_host_vmhba_info.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052/ /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052/AnsiballZ_vmware_host_vmhba_info.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052/AnsiballZ_vmware_host_vmhba_info.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1570019424.92-77457880685052/ > /dev/null 2>&1 && sleep 0'
ok: [localhost -> localhost] => {
"changed": false,
"hosts_vmhbas_info": {
"hostname.example.com": {
"vmhba_details": [
{
"adapter": "HPE Smart Array P220i",
"bus": 34,
"device": "vmhba0",
"driver": "nhpsa",
"location": "0000:22:00.0",
"model": "Smart Array P220i",
"node_wwn": "50:01:43:80:32:53:90:a0",
"status": "unknown",
"type": "SAS"
},
{
"adapter": "Broadcom Corporation QLogic 57840 10/20 Gigabit Ethernet Adapter",
"bus": 2,
"device": "vmhba32",
"driver": "bnx2fc",
"location": "0000:02:00.2",
"model": "QLogic 57840 10/20 Gigabit Ethernet Adapter",
"node_wwn": "11:53:10:06:16:15:36:94:72",
"port_type": "unknown",
"port_wwn": "11:53:10:06:16:15:36:94:72",
"speed": 0,
"status": "online",
"type": "FibreChannelOverEthernetHba"
},
{
"adapter": "Broadcom Corporation QLogic 57840 10/20 Gigabit Ethernet Adapter",
"bus": 2,
"device": "vmhba33",
"driver": "bnx2fc",
"location": "0000:02:00.3",
"model": "QLogic 57840 10/20 Gigabit Ethernet Adapter",
"node_wwn": "11:53:04:10:85:47:50:56:04",
"port_type": "unknown",
"port_wwn": "23:05:96:25:90:08:19:03:01",
"speed": 0,
"status": "unknown",
"type": "FibreChannelOverEthernetHba"
},
{
"adapter": "Broadcom Corporation QLogic 57840 10/20 Gigabit Ethernet Adapter",
"bus": 33,
"device": "vmhba34",
"driver": "bnx2fc",
"location": "0000:21:00.2",
"model": "QLogic 57840 10/20 Gigabit Ethernet Adapter",
"node_wwn": "11:53:04:10:85:47:50:57:39",
"port_type": "unknown",
"port_wwn": "23:05:96:25:90:08:19:04:36",
"speed": 0,
"status": "unknown",
"type": "FibreChannelOverEthernetHba"
},
{
"adapter": "Broadcom Corporation QLogic 57840 10/20 Gigabit Ethernet Adapter",
"bus": 33,
"device": "vmhba35",
"driver": "bnx2fc",
"location": "0000:21:00.3",
"model": "QLogic 57840 10/20 Gigabit Ethernet Adapter",
"node_wwn": "11:53:10:06:16:15:36:94:72",
"port_type": "unknown",
"port_wwn": "11:53:10:06:16:15:36:94:72",
"speed": 0,
"status": "online",
"type": "FibreChannelOverEthernetHba"
}
]
},
```
|
https://github.com/ansible/ansible/issues/63045
|
https://github.com/ansible/ansible/pull/66692
|
5c1fe78685713707af6351838effe5912990981c
|
65aedc5d4aee95a6a7983f208671f6c614b46e3c
| 2019-10-02T13:00:44Z |
python
| 2020-01-28T04:16:29Z |
lib/ansible/modules/cloud/vmware/vmware_host_vmhba_info.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright: (c) 2018, Christian Kotte <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type

ANSIBLE_METADATA = {
    'metadata_version': '1.1',
    'status': ['preview'],
    'supported_by': 'community'
}

DOCUMENTATION = r'''
---
module: vmware_host_vmhba_info
short_description: Gathers info about vmhbas available on the given ESXi host
description:
- This module can be used to gather information about vmhbas available on the given ESXi host.
- If C(cluster_name) is provided, then vmhba information about all hosts from given cluster will be returned.
- If C(esxi_hostname) is provided, then vmhba information about given host system will be returned.
version_added: '2.9'
author:
- Christian Kotte (@ckotte)
notes:
- Tested on vSphere 6.5
requirements:
- python >= 2.6
- PyVmomi
options:
  esxi_hostname:
    description:
    - Name of the host system to work with.
    - Vmhba information about this ESXi server will be returned.
    - This parameter is required if C(cluster_name) is not specified.
    type: str
  cluster_name:
    description:
    - Name of the cluster from which all host systems will be used.
    - Vmhba information about each ESXi server will be returned for the given cluster.
    - This parameter is required if C(esxi_hostname) is not specified.
    type: str
extends_documentation_fragment: vmware.documentation
'''

EXAMPLES = r'''
- name: Gather info about vmhbas of all ESXi Host in the given Cluster
  vmware_host_vmhba_info:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    cluster_name: '{{ cluster_name }}'
  delegate_to: localhost
  register: cluster_host_vmhbas

- name: Gather info about vmhbas of an ESXi Host
  vmware_host_vmhba_info:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    esxi_hostname: '{{ esxi_hostname }}'
  delegate_to: localhost
  register: host_vmhbas
'''

RETURN = r'''
hosts_vmhbas_info:
    description:
    - dict with hostname as key and dict with vmhbas information as value.
    returned: hosts_vmhbas_info
    type: dict
    sample:
        {
            "10.76.33.204": {
                "vmhba_details": [
                    {
                        "adapter": "HPE Smart Array P440ar",
                        "bus": 3,
                        "device": "vmhba0",
                        "driver": "nhpsa",
                        "location": "0000:03:00.0",
                        "model": "Smart Array P440ar",
                        "node_wwn": "50:01:43:80:37:18:9e:a0",
                        "status": "unknown",
                        "type": "SAS"
                    },
                    {
                        "adapter": "QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI Express HBA",
                        "bus": 5,
                        "device": "vmhba1",
                        "driver": "qlnativefc",
                        "location": "0000:05:00.0",
                        "model": "ISP2532-based 8Gb Fibre Channel to PCI Express HBA",
                        "node_wwn": "57:64:96:32:15:90:23:95:82",
                        "port_type": "unknown",
                        "port_wwn": "57:64:96:32:15:90:23:95:82",
                        "speed": 8,
                        "status": "online",
                        "type": "Fibre Channel"
                    },
                    {
                        "adapter": "QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI Express HBA",
                        "bus": 8,
                        "device": "vmhba2",
                        "driver": "qlnativefc",
                        "location": "0000:08:00.0",
                        "model": "ISP2532-based 8Gb Fibre Channel to PCI Express HBA",
                        "node_wwn": "57:64:96:32:15:90:23:95:21",
                        "port_type": "unknown",
                        "port_wwn": "57:64:96:32:15:90:23:95:21",
                        "speed": 8,
                        "status": "online",
                        "type": "Fibre Channel"
                    }
                ],
            }
        }
'''

from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import vmware_argument_spec, PyVmomi


class HostVmhbaMgr(PyVmomi):
    """Class to manage vmhba info"""

    def __init__(self, module):
        super(HostVmhbaMgr, self).__init__(module)
        cluster_name = self.params.get('cluster_name', None)
        esxi_host_name = self.params.get('esxi_hostname', None)
        self.hosts = self.get_all_host_objs(cluster_name=cluster_name, esxi_host_name=esxi_host_name)
        if not self.hosts:
            self.module.fail_json(msg="Failed to find host system.")

    def gather_host_vmhba_info(self):
        """Gather vmhba info"""
        hosts_vmhba_info = {}
        for host in self.hosts:
            host_vmhba_info = dict()
            host_st_system = host.configManager.storageSystem
            if host_st_system:
                device_info = host_st_system.storageDeviceInfo
                host_vmhba_info['vmhba_details'] = []
                for hba in device_info.hostBusAdapter:
                    hba_info = dict()
                    if hba.pci:
                        hba_info['location'] = hba.pci
                        for pci_device in host.hardware.pciDevice:
                            if pci_device.id == hba.pci:
                                hba_info['adapter'] = pci_device.vendorName + ' ' + pci_device.deviceName
                                break
                    else:
                        hba_info['location'] = 'PCI'
                    hba_info['device'] = hba.device
                    # contains type as string in format of 'key-vim.host.FibreChannelHba-vmhba1'
                    hba_type = hba.key.split(".")[-1].split("-")[0]
                    if hba_type == 'SerialAttachedHba':
                        hba_info['type'] = 'SAS'
                    elif hba_type == 'FibreChannelHba':
                        hba_info['type'] = 'Fibre Channel'
                    else:
                        hba_info['type'] = hba_type
                    hba_info['bus'] = hba.bus
                    hba_info['status'] = hba.status
                    hba_info['model'] = hba.model
                    hba_info['driver'] = hba.driver
                    try:
                        hba_info['node_wwn'] = self.format_number(hba.nodeWorldWideName)
                    except AttributeError:
                        pass
                    try:
                        hba_info['port_wwn'] = self.format_number(hba.portWorldWideName)
                    except AttributeError:
                        pass
                    try:
                        hba_info['port_type'] = hba.portType
                    except AttributeError:
                        pass
                    try:
                        hba_info['speed'] = hba.speed
                    except AttributeError:
                        pass
                    host_vmhba_info['vmhba_details'].append(hba_info)
            hosts_vmhba_info[host.name] = host_vmhba_info
        return hosts_vmhba_info

    @staticmethod
    def format_number(number):
        """Format number"""
        string = str(number)
        # NOTE: zip() stops at the shorter of the two slices, so the final
        # digit of an odd-length value is silently dropped -- this is the
        # WWN truncation reported in this issue.
        return ':'.join(a + b for a, b in zip(string[::2], string[1::2]))


def main():
    """Main"""
    argument_spec = vmware_argument_spec()
    argument_spec.update(
        cluster_name=dict(type='str', required=False),
        esxi_hostname=dict(type='str', required=False),
    )

    module = AnsibleModule(
        argument_spec=argument_spec,
        required_one_of=[
            ['cluster_name', 'esxi_hostname'],
        ],
        supports_check_mode=True,
    )

    host_vmhba_mgr = HostVmhbaMgr(module)
    module.exit_json(changed=False, hosts_vmhbas_info=host_vmhba_mgr.gather_host_vmhba_info())


if __name__ == "__main__":
    main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,707 |
vultr_server_info fails with "Could not find plans with VPSPLANID: 200"
|
<!--- Verify first that your issue is not already reported on GitHub -->
Not found.
<!--- Also test if the latest release and devel branch are affected too -->
Apologies, I'm not in a position to do that now.
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
vultr_server_info crashes for a server using a no-longer-available plan.
I have a Vultr server purchased as a special offer. That offer is no longer available (boo). It uses plan 200.
I suggest vultr_server_info should still return the other server data when it cannot find information about a plan.
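A minimal sketch of the suggested behavior (hypothetical helper and field names, not the module's actual code — the real API handling lives in `module_utils/vultr.py`): fall back to a placeholder instead of failing when a plan ID is no longer in the public plan list.

```python
# Hypothetical sketch of graceful degradation for retired plan IDs.
def lookup_plan(plan_id, available_plans):
    """Return plan details, or a minimal placeholder for retired plan IDs."""
    for plan in available_plans:
        if str(plan.get('VPSPLANID')) == str(plan_id):
            return plan
    # Retired/special-offer plans are absent from the public plan list;
    # keep the ID so the rest of the server info can still be returned.
    return {'VPSPLANID': str(plan_id), 'name': 'unknown (retired plan)'}

plans = [{'VPSPLANID': '201', 'name': '1024 MB RAM'}]
print(lookup_plan('200', plans)['name'])  # unknown (retired plan)
```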
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vultr_server_info
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /home/dylan/ansible/ansible.cfg
configured module search path = ['/home/dylan/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Oct 11 2019, 11:15:58) [Clang 8.0.1 (tags/RELEASE_801/final)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/home/dylan/ansible/ansible.cfg) = -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30
DEFAULT_BECOME_METHOD(/home/dylan/ansible/ansible.cfg) = doas
DEFAULT_HOST_LIST(/home/dylan/ansible/ansible.cfg) = ['/home/dylan/ansible/inventory']
DEFAULT_VAULT_PASSWORD_FILE(/home/dylan/ansible/ansible.cfg) = /home/dylan/.vault_password
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
OpenBSD 6.6 fully patched (master and target). Ansible acquired from pip/pip3. vultr_server_info is actually run on the target, which is the Vultr server in question. Vultr can provide all the info about the VM hardware, etc., I hope.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run ansible-playbook with a playbook including the YAML below. Clearly, one needs a Vultr account!
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Gather Vultr servers information
vultr_server_info:
api_key: "{{ vultr_api }}"
register: result
- name: Print the gathered information
debug:
var: result.vultr_server_info
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect to get information about my vultr server(s).
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
TASK [Gather Vultr servers information] ******************************************************************************************************************************************************
task path: /home/dylan/ansible/tasks/vultr/info.yml:123
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
<deaddog.example.com> (0, b'/home/ansible\n', b'')
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438 `" && echo ansible-tmp-1579766233.5392485-31526552794438="` echo /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438 `" ) && sleep 0'"'"''
<deaddog.example.com> (0, b'ansible-tmp-1579766233.5392485-31526552794438=/home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438\n', b'')
Using module file /usr/local/lib/python3.7/site-packages/ansible/modules/cloud/vultr/vultr_server_info.py
<deaddog.example.com> PUT /home/dylan/.ansible/tmp/ansible-local-445881sc7e_7k/tmp6l65zgg5 TO /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py
<deaddog.example.com> SSH: EXEC sftp -b - -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed '[deaddog.example.com]'
<deaddog.example.com> (0, b'sftp> put /home/dylan/.ansible/tmp/ansible-local-445881sc7e_7k/tmp6l65zgg5 /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py\n', b'')
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'chmod u+x /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/ /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py && sleep 0'"'"''
<deaddog.example.com> (0, b'', b'')
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed -tt deaddog.example.com '/bin/sh -c '"'"'doas /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-kyusbotingfhthztzyzhtjhwrwcuxdvz ; /usr/local/bin/python3 /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<deaddog.example.com> (1, b'\r\r\n\r\n{"msg": "Could not find plans with VPSPLANID: 200", "failed": true, "invocation": {"module_args": {"api_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "api_account": "default", "validate_certs": true, "api_timeout": null, "api_retries": null, "api_retry_max_delay": null, "api_endpoint": null}}}\r\n', b'Connection to deaddog.example.com closed.\r\n')
<deaddog.example.com> Failed to connect to the host via ssh: Connection to deaddog.example.com closed.
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'rm -f -r /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/ > /dev/null 2>&1 && sleep 0'"'"''
<deaddog.example.com> (0, b'', b'')
fatal: [butterfly]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"api_account": "default",
"api_endpoint": null,
"api_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"api_retries": null,
"api_retry_max_delay": null,
"api_timeout": null,
"validate_certs": true
}
},
"msg": "Could not find plans with VPSPLANID: 200"
}
[[NOTE: identifying information may have been nibbled ]]
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Gather Vultr servers information] ******************************************************************************************************************************************************
fatal: [butterfly]: FAILED! => {"changed": false, "msg": "Could not find plans with VPSPLANID: 200"}
```
|
https://github.com/ansible/ansible/issues/66707
|
https://github.com/ansible/ansible/pull/66792
|
2dc9841806499810f55c8284bef3d8206ccb20ee
|
78e666dd39e76c99e2c6d52a07cfa5cba175114a
| 2020-01-23T08:07:50Z |
python
| 2020-01-28T09:46:17Z |
changelogs/fragments/66792-vultr-improve-plan.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,707 |
vultr_server_info fails with "Could not find plans with VPSPLANID: 200"
|
<!--- Verify first that your issue is not already reported on GitHub -->
Not found.
<!--- Also test if the latest release and devel branch are affected too -->
Apologies, I'm not in a position to do that now.
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
vultr_server_info crashes for a server using a no-longer-available plan.
I have a Vultr server purchased as a special offer. That offer is no longer available (boo). It uses plan 200.
I suggest vultr_server_info should still return the other server data when it cannot find information about a plan.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vultr_server_info
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /home/dylan/ansible/ansible.cfg
configured module search path = ['/home/dylan/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Oct 11 2019, 11:15:58) [Clang 8.0.1 (tags/RELEASE_801/final)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/home/dylan/ansible/ansible.cfg) = -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30
DEFAULT_BECOME_METHOD(/home/dylan/ansible/ansible.cfg) = doas
DEFAULT_HOST_LIST(/home/dylan/ansible/ansible.cfg) = ['/home/dylan/ansible/inventory']
DEFAULT_VAULT_PASSWORD_FILE(/home/dylan/ansible/ansible.cfg) = /home/dylan/.vault_password
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
OpenBSD 6.6 fully patched (master and target). Ansible acquired from pip/pip3. vultr_server_info is actually run on the target, which is the Vultr server in question. Vultr can provide all the info about the VM hardware, etc., I hope.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run ansible-playbook with a playbook including the YAML below. Clearly, one needs a Vultr account!
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Gather Vultr servers information
vultr_server_info:
api_key: "{{ vultr_api }}"
register: result
- name: Print the gathered information
debug:
var: result.vultr_server_info
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect to get information about my vultr server(s).
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
TASK [Gather Vultr servers information] ******************************************************************************************************************************************************
task path: /home/dylan/ansible/tasks/vultr/info.yml:123
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
<deaddog.example.com> (0, b'/home/ansible\n', b'')
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438 `" && echo ansible-tmp-1579766233.5392485-31526552794438="` echo /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438 `" ) && sleep 0'"'"''
<deaddog.example.com> (0, b'ansible-tmp-1579766233.5392485-31526552794438=/home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438\n', b'')
Using module file /usr/local/lib/python3.7/site-packages/ansible/modules/cloud/vultr/vultr_server_info.py
<deaddog.example.com> PUT /home/dylan/.ansible/tmp/ansible-local-445881sc7e_7k/tmp6l65zgg5 TO /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py
<deaddog.example.com> SSH: EXEC sftp -b - -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed '[deaddog.example.com]'
<deaddog.example.com> (0, b'sftp> put /home/dylan/.ansible/tmp/ansible-local-445881sc7e_7k/tmp6l65zgg5 /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py\n', b'')
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'chmod u+x /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/ /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py && sleep 0'"'"''
<deaddog.example.com> (0, b'', b'')
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed -tt deaddog.example.com '/bin/sh -c '"'"'doas /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-kyusbotingfhthztzyzhtjhwrwcuxdvz ; /usr/local/bin/python3 /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<deaddog.example.com> (1, b'\r\r\n\r\n{"msg": "Could not find plans with VPSPLANID: 200", "failed": true, "invocation": {"module_args": {"api_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "api_account": "default", "validate_certs": true, "api_timeout": null, "api_retries": null, "api_retry_max_delay": null, "api_endpoint": null}}}\r\n', b'Connection to deaddog.example.com closed.\r\n')
<deaddog.example.com> Failed to connect to the host via ssh: Connection to deaddog.example.com closed.
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'rm -f -r /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/ > /dev/null 2>&1 && sleep 0'"'"''
<deaddog.example.com> (0, b'', b'')
fatal: [butterfly]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"api_account": "default",
"api_endpoint": null,
"api_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"api_retries": null,
"api_retry_max_delay": null,
"api_timeout": null,
"validate_certs": true
}
},
"msg": "Could not find plans with VPSPLANID: 200"
}
[[NOTE: identifying information may have been nibbled ]]
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Gather Vultr servers information] ******************************************************************************************************************************************************
fatal: [butterfly]: FAILED! => {"changed": false, "msg": "Could not find plans with VPSPLANID: 200"}
```
|
https://github.com/ansible/ansible/issues/66707
|
https://github.com/ansible/ansible/pull/66792
|
2dc9841806499810f55c8284bef3d8206ccb20ee
|
78e666dd39e76c99e2c6d52a07cfa5cba175114a
| 2020-01-23T08:07:50Z |
python
| 2020-01-28T09:46:17Z |
lib/ansible/module_utils/vultr.py
|
# -*- coding: utf-8 -*-
# (c) 2017, René Moser <[email protected]>
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import os
import time
import random
import urllib
from ansible.module_utils.six.moves import configparser
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.urls import fetch_url
VULTR_API_ENDPOINT = "https://api.vultr.com"
VULTR_USER_AGENT = 'Ansible Vultr'
def vultr_argument_spec():
return dict(
api_key=dict(type='str', default=os.environ.get('VULTR_API_KEY'), no_log=True),
api_timeout=dict(type='int', default=os.environ.get('VULTR_API_TIMEOUT')),
api_retries=dict(type='int', default=os.environ.get('VULTR_API_RETRIES')),
api_retry_max_delay=dict(type='int', default=os.environ.get('VULTR_API_RETRY_MAX_DELAY')),
api_account=dict(type='str', default=os.environ.get('VULTR_API_ACCOUNT') or 'default'),
api_endpoint=dict(type='str', default=os.environ.get('VULTR_API_ENDPOINT')),
validate_certs=dict(type='bool', default=True),
)
class Vultr:
def __init__(self, module, namespace):
if module._name.startswith('vr_'):
module.deprecate("The Vultr modules were renamed. The prefix of the modules changed from vr_ to vultr_", version='2.11')
self.module = module
# Namespace use for returns
self.namespace = namespace
self.result = {
'changed': False,
namespace: dict(),
'diff': dict(before=dict(), after=dict())
}
# For caching HTTP API responses
self.api_cache = dict()
try:
config = self.read_env_variables()
config.update(Vultr.read_ini_config(self.module.params.get('api_account')))
except KeyError:
config = {}
try:
self.api_config = {
'api_key': self.module.params.get('api_key') or config.get('key'),
'api_timeout': self.module.params.get('api_timeout') or int(config.get('timeout') or 60),
'api_retries': self.module.params.get('api_retries') or int(config.get('retries') or 5),
'api_retry_max_delay': self.module.params.get('api_retry_max_delay') or int(config.get('retry_max_delay') or 12),
'api_endpoint': self.module.params.get('api_endpoint') or config.get('endpoint') or VULTR_API_ENDPOINT,
}
except ValueError as e:
self.fail_json(msg="One of the following settings "
"in section '%s' of the ini config file does not have an integer value: timeout, retries, retry_max_delay. "
"Error was %s" % (self.module.params.get('api_account'), to_native(e)))
if not self.api_config.get('api_key'):
self.module.fail_json(msg="The API key is not specified. Please refer to the documentation.")
# Common vultr returns
self.result['vultr_api'] = {
'api_account': self.module.params.get('api_account'),
'api_timeout': self.api_config['api_timeout'],
'api_retries': self.api_config['api_retries'],
'api_retry_max_delay': self.api_config['api_retry_max_delay'],
'api_endpoint': self.api_config['api_endpoint'],
}
# Headers to be passed to the API
self.headers = {
'API-Key': "%s" % self.api_config['api_key'],
'User-Agent': VULTR_USER_AGENT,
'Accept': 'application/json',
}
def read_env_variables(self):
keys = ['key', 'timeout', 'retries', 'retry_max_delay', 'endpoint']
env_conf = {}
for key in keys:
if 'VULTR_API_%s' % key.upper() not in os.environ:
continue
env_conf[key] = os.environ['VULTR_API_%s' % key.upper()]
return env_conf
@staticmethod
def read_ini_config(ini_group):
paths = (
os.path.join(os.path.expanduser('~'), '.vultr.ini'),
os.path.join(os.getcwd(), 'vultr.ini'),
)
if 'VULTR_API_CONFIG' in os.environ:
paths += (os.path.expanduser(os.environ['VULTR_API_CONFIG']),)
conf = configparser.ConfigParser()
conf.read(paths)
if not conf._sections.get(ini_group):
return dict()
return dict(conf.items(ini_group))
def fail_json(self, **kwargs):
self.result.update(kwargs)
self.module.fail_json(**self.result)
def get_yes_or_no(self, key):
if self.module.params.get(key) is not None:
return 'yes' if self.module.params.get(key) is True else 'no'
def switch_enable_disable(self, resource, param_key, resource_key=None):
if resource_key is None:
resource_key = param_key
param = self.module.params.get(param_key)
if param is None:
return
r_value = resource.get(resource_key)
if r_value in ['yes', 'no']:
if param and r_value != 'yes':
return "enable"
elif not param and r_value != 'no':
return "disable"
else:
if param and not r_value:
return "enable"
elif not param and r_value:
return "disable"
def api_query(self, path="/", method="GET", data=None):
url = self.api_config['api_endpoint'] + path
if data:
data_encoded = dict()
data_list = ""
for k, v in data.items():
if isinstance(v, list):
for s in v:
try:
data_list += '&%s[]=%s' % (k, urllib.quote(s))
except AttributeError:
data_list += '&%s[]=%s' % (k, urllib.parse.quote(s))
elif v is not None:
data_encoded[k] = v
try:
data = urllib.urlencode(data_encoded) + data_list
except AttributeError:
data = urllib.parse.urlencode(data_encoded) + data_list
retry_max_delay = self.api_config['api_retry_max_delay']
randomness = random.randint(0, 1000) / 1000.0
for retry in range(0, self.api_config['api_retries']):
response, info = fetch_url(
module=self.module,
url=url,
data=data,
method=method,
headers=self.headers,
timeout=self.api_config['api_timeout'],
)
if info.get('status') == 200:
break
# Vultr rate-limits requests per second, so try to be polite
# Use exponential backoff plus a little bit of randomness
delay = 2 ** retry + randomness
if delay > retry_max_delay:
delay = retry_max_delay + randomness
time.sleep(delay)
else:
self.fail_json(msg="Reached API retries limit %s for URL %s, method %s with data %s. Returned %s, with body: %s %s" % (
self.api_config['api_retries'],
url,
method,
data,
info['status'],
info['msg'],
info.get('body')
))
if info.get('status') != 200:
self.fail_json(msg="URL %s, method %s with data %s. Returned %s, with body: %s %s" % (
url,
method,
data,
info['status'],
info['msg'],
info.get('body')
))
res = response.read()
if not res:
return {}
try:
return self.module.from_json(to_native(res)) or {}
except ValueError as e:
self.module.fail_json(msg="Could not process response into json: %s" % e)
def query_resource_by_key(self, key, value, resource='regions', query_by='list', params=None, use_cache=False, id_key=None):
if not value:
return {}
r_list = None
if use_cache:
r_list = self.api_cache.get(resource)
if not r_list:
r_list = self.api_query(path="/v1/%s/%s" % (resource, query_by), data=params)
if use_cache:
self.api_cache.update({
resource: r_list
})
if not r_list:
return {}
elif isinstance(r_list, list):
for r_data in r_list:
if str(r_data[key]) == str(value):
return r_data
if id_key is not None and to_text(r_data[id_key]) == to_text(value):
return r_data
elif isinstance(r_list, dict):
for r_id, r_data in r_list.items():
if str(r_data[key]) == str(value):
return r_data
if id_key is not None and to_text(r_data[id_key]) == to_text(value):
return r_data
if id_key:
msg = "Could not find %s with ID or %s: %s" % (resource, key, value)
else:
msg = "Could not find %s with %s: %s" % (resource, key, value)
self.module.fail_json(msg=msg)
@staticmethod
def normalize_result(resource, schema, remove_missing_keys=True):
if remove_missing_keys:
fields_to_remove = set(resource.keys()) - set(schema.keys())
for field in fields_to_remove:
resource.pop(field)
for search_key, config in schema.items():
if search_key in resource:
if 'convert_to' in config:
if config['convert_to'] == 'int':
resource[search_key] = int(resource[search_key])
elif config['convert_to'] == 'float':
resource[search_key] = float(resource[search_key])
elif config['convert_to'] == 'bool':
resource[search_key] = True if resource[search_key] == 'yes' else False
if 'transform' in config:
resource[search_key] = config['transform'](resource[search_key])
if 'key' in config:
resource[config['key']] = resource[search_key]
del resource[search_key]
return resource
def get_result(self, resource):
if resource:
if isinstance(resource, list):
self.result[self.namespace] = [Vultr.normalize_result(item, self.returns) for item in resource]
else:
self.result[self.namespace] = Vultr.normalize_result(resource, self.returns)
return self.result
def get_plan(self, plan=None, key='name'):
value = plan or self.module.params.get('plan')
return self.query_resource_by_key(
key=key,
value=value,
resource='plans',
use_cache=True
)
def get_firewallgroup(self, firewallgroup=None, key='description'):
value = firewallgroup or self.module.params.get('firewallgroup')
return self.query_resource_by_key(
key=key,
value=value,
resource='firewall',
query_by='group_list',
use_cache=True
)
def get_application(self, application=None, key='name'):
value = application or self.module.params.get('application')
return self.query_resource_by_key(
key=key,
value=value,
resource='app',
use_cache=True
)
def get_region(self, region=None, key='name'):
value = region or self.module.params.get('region')
return self.query_resource_by_key(
key=key,
value=value,
resource='regions',
use_cache=True
)
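As an aside on the retry handling in `api_query` above: the delay schedule is exponential backoff (`2 ** retry`) plus a fixed random jitter, capped at `api_retry_max_delay`. A minimal standalone sketch of that schedule (illustration only, not part of the module; `backoff_delays` is a hypothetical helper):

```python
def backoff_delays(retries, retry_max_delay, randomness):
    # Mirror of the delay schedule used in Vultr.api_query:
    # exponential backoff (2 ** retry) plus jitter, capped at
    # retry_max_delay (+ jitter).
    delays = []
    for retry in range(retries):
        delay = 2 ** retry + randomness
        if delay > retry_max_delay:
            delay = retry_max_delay + randomness
        delays.append(delay)
    return delays

# With the defaults (5 retries, max delay 12) and jitter 0.5:
print(backoff_delays(5, 12, 0.5))  # [1.5, 2.5, 4.5, 8.5, 12.5]
```

With the default `api_retry_max_delay` of 12 seconds, the cap only kicks in from the fifth attempt onward.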
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,707 |
vultr_server_info fails with "Could not find plans with VPSPLANID: 200"
|
<!--- Verify first that your issue is not already reported on GitHub -->
Not found.
<!--- Also test if the latest release and devel branch are affected too -->
Apologies, I'm not in a position to do that now.
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
vultr_server_info crashes for a server using a no-longer-available plan.
I have a vultr server purchased as special offer. That offer is no longer available (boo). It uses plan 200.
I suggest vultr_server_info should still return the other server data when it cannot find information about a plan.
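A minimal sketch of what such a fallback could look like (hypothetical helper, not the actual module code; the eventual fix may differ): when the plan ID is missing from `/v1/plans/list`, as with withdrawn special-offer plans like 200, fall back to the raw ID instead of failing:

```python
def resolve_plan_name(plans, vpsplanid):
    # plans: dict keyed by VPSPLANID, as returned by /v1/plans/list.
    # Deprecated/withdrawn plans may be missing from it entirely.
    plan = plans.get(str(vpsplanid))
    if plan:
        return plan["name"]
    # Unknown plan: return the raw ID so the remaining server
    # information can still be reported.
    return str(vpsplanid)

plans = {"201": {"name": "1024 MB RAM,25 GB SSD,1.00 TB BW"}}
print(resolve_plan_name(plans, 200))  # "200"
```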
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vultr_server_info
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /home/dylan/ansible/ansible.cfg
configured module search path = ['/home/dylan/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Oct 11 2019, 11:15:58) [Clang 8.0.1 (tags/RELEASE_801/final)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/home/dylan/ansible/ansible.cfg) = -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30
DEFAULT_BECOME_METHOD(/home/dylan/ansible/ansible.cfg) = doas
DEFAULT_HOST_LIST(/home/dylan/ansible/ansible.cfg) = ['/home/dylan/ansible/inventory']
DEFAULT_VAULT_PASSWORD_FILE(/home/dylan/ansible/ansible.cfg) = /home/dylan/.vault_password
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
OpenBSD 6.6 fully patched (master and target). Ansible acquired from pip/pip3. vultr_server_info is actually run on the target, which is the vultr server in question. Vultr can provide all the info about the vm hardware, etc., I hope.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
An ansible-playbook run of a playbook including the YAML below. Clearly, one needs a Vultr account!
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Gather Vultr servers information
vultr_server_info:
api_key: "{{ vultr_api }}"
register: result
- name: Print the gathered information
debug:
var: result.vultr_server_info
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect to get information about my vultr server(s).
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
TASK [Gather Vultr servers information] ******************************************************************************************************************************************************
task path: /home/dylan/ansible/tasks/vultr/info.yml:123
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
<deaddog.example.com> (0, b'/home/ansible\n', b'')
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438 `" && echo ansible-tmp-1579766233.5392485-31526552794438="` echo /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438 `" ) && sleep 0'"'"''
<deaddog.example.com> (0, b'ansible-tmp-1579766233.5392485-31526552794438=/home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438\n', b'')
Using module file /usr/local/lib/python3.7/site-packages/ansible/modules/cloud/vultr/vultr_server_info.py
<deaddog.example.com> PUT /home/dylan/.ansible/tmp/ansible-local-445881sc7e_7k/tmp6l65zgg5 TO /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py
<deaddog.example.com> SSH: EXEC sftp -b - -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed '[deaddog.example.com]'
<deaddog.example.com> (0, b'sftp> put /home/dylan/.ansible/tmp/ansible-local-445881sc7e_7k/tmp6l65zgg5 /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py\n', b'')
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'chmod u+x /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/ /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py && sleep 0'"'"''
<deaddog.example.com> (0, b'', b'')
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed -tt deaddog.example.com '/bin/sh -c '"'"'doas /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-kyusbotingfhthztzyzhtjhwrwcuxdvz ; /usr/local/bin/python3 /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<deaddog.example.com> (1, b'\r\r\n\r\n{"msg": "Could not find plans with VPSPLANID: 200", "failed": true, "invocation": {"module_args": {"api_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "api_account": "default", "validate_certs": true, "api_timeout": null, "api_retries": null, "api_retry_max_delay": null, "api_endpoint": null}}}\r\n', b'Connection to deaddog.example.com closed.\r\n')
<deaddog.example.com> Failed to connect to the host via ssh: Connection to deaddog.example.com closed.
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'rm -f -r /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/ > /dev/null 2>&1 && sleep 0'"'"''
<deaddog.example.com> (0, b'', b'')
fatal: [butterfly]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"api_account": "default",
"api_endpoint": null,
"api_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"api_retries": null,
"api_retry_max_delay": null,
"api_timeout": null,
"validate_certs": true
}
},
"msg": "Could not find plans with VPSPLANID: 200"
}
[[NOTE: identifying information may have been nibbled ]]
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Gather Vultr servers information] ******************************************************************************************************************************************************
fatal: [butterfly]: FAILED! => {"changed": false, "msg": "Could not find plans with VPSPLANID: 200"}
```
|
https://github.com/ansible/ansible/issues/66707
|
https://github.com/ansible/ansible/pull/66792
|
2dc9841806499810f55c8284bef3d8206ccb20ee
|
78e666dd39e76c99e2c6d52a07cfa5cba175114a
| 2020-01-23T08:07:50Z |
python
| 2020-01-28T09:46:17Z |
lib/ansible/modules/cloud/vultr/vultr_server.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2017, René Moser <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: vultr_server
short_description: Manages virtual servers on Vultr.
description:
- Deploy, start, stop, update, restart, reinstall servers.
version_added: "2.5"
author: "René Moser (@resmo)"
options:
name:
description:
- Name of the server.
required: true
aliases: [ label ]
type: str
hostname:
description:
- The hostname to assign to this server.
type: str
os:
description:
- The operating system name or ID.
- Required if the server does not yet exist and is not restoring from a snapshot.
type: str
snapshot:
version_added: "2.8"
description:
- Name or ID of the snapshot to restore the server from.
type: str
firewall_group:
description:
- The firewall group description or ID to assign this server to.
type: str
plan:
description:
- Plan name or ID to use for the server.
- Required if the server does not yet exist.
type: str
force:
description:
- Force stop/start the server if required to apply changes
- Otherwise a running server will not be changed.
type: bool
default: no
notify_activate:
description:
- Whether to send an activation email when the server is ready or not.
- Only considered on creation.
type: bool
private_network_enabled:
description:
- Whether to enable private networking or not.
type: bool
auto_backup_enabled:
description:
- Whether to enable automatic backups or not.
type: bool
ipv6_enabled:
description:
- Whether to enable IPv6 or not.
type: bool
tag:
description:
- Tag for the server.
type: str
user_data:
description:
- User data to be passed to the server.
type: str
startup_script:
description:
- Name or ID of the startup script to execute on boot.
- Only considered while creating the server.
type: str
ssh_keys:
description:
- List of SSH key names or IDs passed to the server on creation.
aliases: [ ssh_key ]
type: list
reserved_ip_v4:
description:
- IP address of the floating IP to use as the main IP of this server.
- Only considered on creation.
type: str
region:
description:
- Region name or ID the server is deployed into.
- Required if the server does not yet exist.
type: str
state:
description:
- State of the server.
default: present
choices: [ present, absent, restarted, reinstalled, started, stopped ]
type: str
extends_documentation_fragment: vultr
'''
EXAMPLES = '''
- name: create server
delegate_to: localhost
vultr_server:
name: "{{ vultr_server_name }}"
os: CentOS 7 x64
plan: 1024 MB RAM,25 GB SSD,1.00 TB BW
ssh_keys:
- my_key
- your_key
region: Amsterdam
state: present
- name: ensure a server is present and started
delegate_to: localhost
vultr_server:
name: "{{ vultr_server_name }}"
os: CentOS 7 x64
plan: 1024 MB RAM,25 GB SSD,1.00 TB BW
firewall_group: my_group
ssh_key: my_key
region: Amsterdam
state: started
- name: ensure a server is present and stopped provisioned using IDs
delegate_to: localhost
vultr_server:
name: "{{ vultr_server_name }}"
os: "167"
plan: "201"
region: "7"
state: stopped
- name: ensure an existing server is stopped
delegate_to: localhost
vultr_server:
name: "{{ vultr_server_name }}"
state: stopped
- name: ensure an existing server is started
delegate_to: localhost
vultr_server:
name: "{{ vultr_server_name }}"
state: started
- name: ensure a server is absent
delegate_to: localhost
vultr_server:
name: "{{ vultr_server_name }}"
state: absent
'''
RETURN = '''
---
vultr_api:
description: Response from Vultr API with a few additions/modification
returned: success
type: complex
contains:
api_account:
description: Account used in the ini file to select the key
returned: success
type: str
sample: default
api_timeout:
description: Timeout used for the API requests
returned: success
type: int
sample: 60
api_retries:
description: Amount of max retries for the API requests
returned: success
type: int
sample: 5
api_retry_max_delay:
description: Exponential backoff delay in seconds between retries up to this max delay value.
returned: success
type: int
sample: 12
version_added: '2.9'
api_endpoint:
description: Endpoint used for the API requests
returned: success
type: str
sample: "https://api.vultr.com"
vultr_server:
description: Response from Vultr API with a few additions/modification
returned: success
type: complex
contains:
id:
description: ID of the server
returned: success
type: str
sample: 10194376
name:
description: Name (label) of the server
returned: success
type: str
sample: "ansible-test-vm"
plan:
description: Plan used for the server
returned: success
type: str
sample: "1024 MB RAM,25 GB SSD,1.00 TB BW"
allowed_bandwidth_gb:
description: Allowed bandwidth to use in GB
returned: success
type: int
sample: 1000
auto_backup_enabled:
description: Whether automatic backups are enabled
returned: success
type: bool
sample: false
cost_per_month:
description: Cost per month for the server
returned: success
type: float
sample: 5.00
current_bandwidth_gb:
description: Current bandwidth used for the server
returned: success
type: int
sample: 0
date_created:
description: Date when the server was created
returned: success
type: str
sample: "2017-08-26 12:47:48"
default_password:
description: Password to login as root into the server
returned: success
type: str
sample: "!p3EWYJm$qDWYaFr"
disk:
description: Information about the disk
returned: success
type: str
sample: "Virtual 25 GB"
v4_gateway:
description: IPv4 gateway
returned: success
type: str
sample: "45.32.232.1"
internal_ip:
description: Internal IP
returned: success
type: str
sample: ""
kvm_url:
description: URL to the VNC
returned: success
type: str
sample: "https://my.vultr.com/subs/vps/novnc/api.php?data=xyz"
region:
description: Region the server was deployed into
returned: success
type: str
sample: "Amsterdam"
v4_main_ip:
description: Main IPv4
returned: success
type: str
sample: "45.32.233.154"
v4_netmask:
description: Netmask IPv4
returned: success
type: str
sample: "255.255.254.0"
os:
description: Operating system used for the server
returned: success
type: str
sample: "CentOS 6 x64"
firewall_group:
description: Firewall group the server is assigned to
returned: success and available
type: str
sample: "my firewall group"
pending_charges:
description: Pending charges
returned: success
type: float
sample: 0.01
power_status:
description: Power status of the server
returned: success
type: str
sample: "running"
ram:
description: Information about the RAM size
returned: success
type: str
sample: "1024 MB"
server_state:
description: State about the server
returned: success
type: str
sample: "ok"
status:
description: Status about the deployment of the server
returned: success
type: str
sample: "active"
tag:
description: TBD
returned: success
type: str
sample: ""
v6_main_ip:
description: Main IPv6
returned: success
type: str
sample: ""
v6_network:
description: Network IPv6
returned: success
type: str
sample: ""
v6_network_size:
description: Network size IPv6
returned: success
type: str
sample: ""
v6_networks:
description: Networks IPv6
returned: success
type: list
sample: []
vcpu_count:
description: Virtual CPU count
returned: success
type: int
sample: 1
'''
import time
import base64
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text, to_bytes
from ansible.module_utils.vultr import (
Vultr,
vultr_argument_spec,
)
class AnsibleVultrServer(Vultr):
def __init__(self, module):
super(AnsibleVultrServer, self).__init__(module, "vultr_server")
self.server = None
self.returns = {
'SUBID': dict(key='id'),
'label': dict(key='name'),
'date_created': dict(),
'allowed_bandwidth_gb': dict(convert_to='int'),
'auto_backups': dict(key='auto_backup_enabled', convert_to='bool'),
'current_bandwidth_gb': dict(),
'kvm_url': dict(),
'default_password': dict(),
'internal_ip': dict(),
'disk': dict(),
'cost_per_month': dict(convert_to='float'),
'location': dict(key='region'),
'main_ip': dict(key='v4_main_ip'),
'network_v4': dict(key='v4_network'),
'gateway_v4': dict(key='v4_gateway'),
'os': dict(),
'pending_charges': dict(convert_to='float'),
'power_status': dict(),
'ram': dict(),
'plan': dict(),
'server_state': dict(),
'status': dict(),
'firewall_group': dict(),
'tag': dict(),
'v6_main_ip': dict(),
'v6_network': dict(),
'v6_network_size': dict(),
'v6_networks': dict(),
'vcpu_count': dict(convert_to='int'),
}
self.server_power_state = None
def get_startup_script(self):
return self.query_resource_by_key(
key='name',
value=self.module.params.get('startup_script'),
resource='startupscript',
)
def get_os(self):
if self.module.params.get('snapshot'):
os_name = 'Snapshot'
else:
os_name = self.module.params.get('os')
return self.query_resource_by_key(
key='name',
value=os_name,
resource='os',
use_cache=True,
id_key='OSID',
)
def get_snapshot(self):
return self.query_resource_by_key(
key='description',
value=self.module.params.get('snapshot'),
resource='snapshot',
id_key='SNAPSHOTID',
)
def get_ssh_keys(self):
ssh_key_names = self.module.params.get('ssh_keys')
if not ssh_key_names:
return []
ssh_keys = []
for ssh_key_name in ssh_key_names:
ssh_key = self.query_resource_by_key(
key='name',
value=ssh_key_name,
resource='sshkey',
use_cache=True,
id_key='SSHKEYID',
)
if ssh_key:
ssh_keys.append(ssh_key)
return ssh_keys
def get_region(self):
return self.query_resource_by_key(
key='name',
value=self.module.params.get('region'),
resource='regions',
use_cache=True,
id_key='DCID',
)
def get_plan(self):
return self.query_resource_by_key(
key='name',
value=self.module.params.get('plan'),
resource='plans',
use_cache=True,
id_key='VPSPLANID',
)
def get_firewall_group(self):
return self.query_resource_by_key(
key='description',
value=self.module.params.get('firewall_group'),
resource='firewall',
query_by='group_list',
id_key='FIREWALLGROUPID'
)
def get_user_data(self):
user_data = self.module.params.get('user_data')
if user_data is not None:
user_data = to_text(base64.b64encode(to_bytes(user_data)))
return user_data
def get_server_user_data(self, server):
if not server or not server.get('SUBID'):
return None
user_data = self.api_query(path="/v1/server/get_user_data?SUBID=%s" % server.get('SUBID'))
return user_data.get('userdata')
def get_server(self, refresh=False):
if self.server is None or refresh:
self.server = None
server_list = self.api_query(path="/v1/server/list")
if server_list:
for server_id, server_data in server_list.items():
if server_data.get('label') == self.module.params.get('name'):
self.server = server_data
plan = self.query_resource_by_key(
key='VPSPLANID',
value=server_data['VPSPLANID'],
resource='plans',
use_cache=True
)
self.server['plan'] = plan.get('name')
os = self.query_resource_by_key(
key='OSID',
value=int(server_data['OSID']),
resource='os',
use_cache=True
)
self.server['os'] = os.get('name')
fwg_id = server_data.get('FIREWALLGROUPID')
fw = self.query_resource_by_key(
key='FIREWALLGROUPID',
value=server_data.get('FIREWALLGROUPID') if fwg_id and fwg_id != "0" else None,
resource='firewall',
query_by='group_list',
use_cache=True
)
self.server['firewall_group'] = fw.get('description')
return self.server
def present_server(self, start_server=True):
server = self.get_server()
if not server:
server = self._create_server(server=server)
else:
server = self._update_server(server=server, start_server=start_server)
return server
def _create_server(self, server=None):
required_params = [
'os',
'plan',
'region',
]
snapshot_restore = self.module.params.get('snapshot') is not None
if snapshot_restore:
required_params.remove('os')
self.module.fail_on_missing_params(required_params=required_params)
self.result['changed'] = True
if not self.module.check_mode:
data = {
'DCID': self.get_region().get('DCID'),
'VPSPLANID': self.get_plan().get('VPSPLANID'),
'FIREWALLGROUPID': self.get_firewall_group().get('FIREWALLGROUPID'),
'OSID': self.get_os().get('OSID'),
'SNAPSHOTID': self.get_snapshot().get('SNAPSHOTID'),
'label': self.module.params.get('name'),
'hostname': self.module.params.get('hostname'),
'SSHKEYID': ','.join([ssh_key['SSHKEYID'] for ssh_key in self.get_ssh_keys()]),
'enable_ipv6': self.get_yes_or_no('ipv6_enabled'),
'enable_private_network': self.get_yes_or_no('private_network_enabled'),
'auto_backups': self.get_yes_or_no('auto_backup_enabled'),
'notify_activate': self.get_yes_or_no('notify_activate'),
'tag': self.module.params.get('tag'),
'reserved_ip_v4': self.module.params.get('reserved_ip_v4'),
'user_data': self.get_user_data(),
'SCRIPTID': self.get_startup_script().get('SCRIPTID'),
}
self.api_query(
path="/v1/server/create",
method="POST",
data=data
)
server = self._wait_for_state(key='status', state='active')
server = self._wait_for_state(state='running', timeout=3600 if snapshot_restore else 60)
return server
def _update_auto_backups_setting(self, server, start_server):
auto_backup_enabled_changed = self.switch_enable_disable(server, 'auto_backup_enabled', 'auto_backups')
if auto_backup_enabled_changed:
if auto_backup_enabled_changed == "enable" and server['auto_backups'] == 'disable':
self.module.warn("Backups are disabled. Once disabled, backups can only be enabled again by customer support")
else:
server, warned = self._handle_power_status_for_update(server, start_server)
if not warned:
self.result['changed'] = True
self.result['diff']['before']['auto_backup_enabled'] = server.get('auto_backups')
self.result['diff']['after']['auto_backup_enabled'] = self.get_yes_or_no('auto_backup_enabled')
if not self.module.check_mode:
data = {
'SUBID': server['SUBID']
}
self.api_query(
path="/v1/server/backup_%s" % auto_backup_enabled_changed,
method="POST",
data=data
)
return server
def _update_ipv6_setting(self, server, start_server):
ipv6_enabled_changed = self.switch_enable_disable(server, 'ipv6_enabled', 'v6_main_ip')
if ipv6_enabled_changed:
if ipv6_enabled_changed == "disable":
self.module.warn("The Vultr API does not allow disabling IPv6")
else:
server, warned = self._handle_power_status_for_update(server, start_server)
if not warned:
self.result['changed'] = True
self.result['diff']['before']['ipv6_enabled'] = False
self.result['diff']['after']['ipv6_enabled'] = True
if not self.module.check_mode:
data = {
'SUBID': server['SUBID']
}
self.api_query(
path="/v1/server/ipv6_%s" % ipv6_enabled_changed,
method="POST",
data=data
)
server = self._wait_for_state(key='v6_main_ip')
return server
def _update_private_network_setting(self, server, start_server):
private_network_enabled_changed = self.switch_enable_disable(server, 'private_network_enabled', 'internal_ip')
if private_network_enabled_changed:
if private_network_enabled_changed == "disable":
self.module.warn("The Vultr API does not allow disabling the private network")
else:
server, warned = self._handle_power_status_for_update(server, start_server)
if not warned:
self.result['changed'] = True
self.result['diff']['before']['private_network_enabled'] = False
self.result['diff']['after']['private_network_enabled'] = True
if not self.module.check_mode:
data = {
'SUBID': server['SUBID']
}
self.api_query(
path="/v1/server/private_network_%s" % private_network_enabled_changed,
method="POST",
data=data
)
return server
def _update_plan_setting(self, server, start_server):
plan = self.get_plan()
plan_changed = True if plan and plan['VPSPLANID'] != server.get('VPSPLANID') else False
if plan_changed:
server, warned = self._handle_power_status_for_update(server, start_server)
if not warned:
self.result['changed'] = True
self.result['diff']['before']['plan'] = server.get('plan')
self.result['diff']['after']['plan'] = plan['name']
if not self.module.check_mode:
data = {
'SUBID': server['SUBID'],
'VPSPLANID': plan['VPSPLANID'],
}
self.api_query(
path="/v1/server/upgrade_plan",
method="POST",
data=data
)
return server
def _handle_power_status_for_update(self, server, start_server):
# Remember the power state before we handle any action
if self.server_power_state is None:
self.server_power_state = server['power_status']
# A stopped server can be updated
if self.server_power_state == "stopped":
return server, False
# A running server must be forced to update unless the wanted state is stopped
elif self.module.params.get('force') or not start_server:
warned = False
if not self.module.check_mode:
# Some update APIs would restart the VM, we handle the restart manually
# by stopping the server and start it at the end of the changes
server = self.stop_server(skip_results=True)
# Warn the user that a running server won't get changed
else:
warned = True
self.module.warn("Some changes won't be applied to running instances. " +
"Use force=true to allow the instance %s to be stopped/started." % server['label'])
return server, warned
def _update_server(self, server=None, start_server=True):
# Wait for server to unlock if restoring
if server.get('os').strip() == 'Snapshot':
server = self._wait_for_state(key='server_status', state='ok', timeout=3600)
# Update auto backups settings, stops server
server = self._update_auto_backups_setting(server=server, start_server=start_server)
# Update IPv6 settings, stops server
server = self._update_ipv6_setting(server=server, start_server=start_server)
# Update private network settings, stops server
server = self._update_private_network_setting(server=server, start_server=start_server)
# Update plan settings, stops server
server = self._update_plan_setting(server=server, start_server=start_server)
# User data
user_data = self.get_user_data()
server_user_data = self.get_server_user_data(server=server)
if user_data is not None and user_data != server_user_data:
self.result['changed'] = True
self.result['diff']['before']['user_data'] = server_user_data
self.result['diff']['after']['user_data'] = user_data
if not self.module.check_mode:
data = {
'SUBID': server['SUBID'],
'userdata': user_data,
}
self.api_query(
path="/v1/server/set_user_data",
method="POST",
data=data
)
# Tags
tag = self.module.params.get('tag')
if tag is not None and tag != server.get('tag'):
self.result['changed'] = True
self.result['diff']['before']['tag'] = server.get('tag')
self.result['diff']['after']['tag'] = tag
if not self.module.check_mode:
data = {
'SUBID': server['SUBID'],
'tag': tag,
}
self.api_query(
path="/v1/server/tag_set",
method="POST",
data=data
)
# Firewall group
firewall_group = self.get_firewall_group()
if firewall_group and firewall_group.get('description') != server.get('firewall_group'):
self.result['changed'] = True
self.result['diff']['before']['firewall_group'] = server.get('firewall_group')
self.result['diff']['after']['firewall_group'] = firewall_group.get('description')
if not self.module.check_mode:
data = {
'SUBID': server['SUBID'],
'FIREWALLGROUPID': firewall_group.get('FIREWALLGROUPID'),
}
self.api_query(
path="/v1/server/firewall_group_set",
method="POST",
data=data
)
# Start server again if it was running before the changes
if not self.module.check_mode:
if self.server_power_state in ['starting', 'running'] and start_server:
server = self.start_server(skip_results=True)
server = self._wait_for_state(key='status', state='active')
return server
def absent_server(self):
server = self.get_server()
if server:
self.result['changed'] = True
self.result['diff']['before']['id'] = server['SUBID']
self.result['diff']['after']['id'] = ""
if not self.module.check_mode:
data = {
'SUBID': server['SUBID']
}
self.api_query(
path="/v1/server/destroy",
method="POST",
data=data
)
for s in range(0, 60):
# break once the server is actually gone; the original condition
# was inverted and made this wait loop a no-op
if not server:
break
time.sleep(2)
server = self.get_server(refresh=True)
else:
self.fail_json(msg="Wait for server '%s' to get deleted timed out" % server['label'])
return server
def restart_server(self):
self.result['changed'] = True
server = self.get_server()
if server:
if not self.module.check_mode:
data = {
'SUBID': server['SUBID']
}
self.api_query(
path="/v1/server/reboot",
method="POST",
data=data
)
server = self._wait_for_state(state='running')
return server
def reinstall_server(self):
self.result['changed'] = True
server = self.get_server()
if server:
if not self.module.check_mode:
data = {
'SUBID': server['SUBID']
}
self.api_query(
path="/v1/server/reinstall",
method="POST",
data=data
)
server = self._wait_for_state(state='running')
return server
def _wait_for_state(self, key='power_status', state=None, timeout=60):
time.sleep(1)
server = self.get_server(refresh=True)
for s in range(0, timeout):
# Check for truthiness if the wanted state is None
if state is None and server.get(key):
break
elif server.get(key) == state:
break
time.sleep(2)
server = self.get_server(refresh=True)
# Timed out
else:
if state is None:
msg = "Wait for '%s' timed out" % key
else:
msg = "Wait for '%s' to get into state '%s' timed out" % (key, state)
self.fail_json(msg=msg)
return server
def start_server(self, skip_results=False):
server = self.get_server()
if server:
if server['power_status'] == 'starting':
server = self._wait_for_state(state='running')
elif server['power_status'] != 'running':
if not skip_results:
self.result['changed'] = True
self.result['diff']['before']['power_status'] = server['power_status']
self.result['diff']['after']['power_status'] = "running"
if not self.module.check_mode:
data = {
'SUBID': server['SUBID']
}
self.api_query(
path="/v1/server/start",
method="POST",
data=data
)
server = self._wait_for_state(state='running')
return server
def stop_server(self, skip_results=False):
server = self.get_server()
if server and server['power_status'] != "stopped":
if not skip_results:
self.result['changed'] = True
self.result['diff']['before']['power_status'] = server['power_status']
self.result['diff']['after']['power_status'] = "stopped"
if not self.module.check_mode:
data = {
'SUBID': server['SUBID'],
}
self.api_query(
path="/v1/server/halt",
method="POST",
data=data
)
server = self._wait_for_state(state='stopped')
return server
def main():
argument_spec = vultr_argument_spec()
argument_spec.update(dict(
name=dict(required=True, aliases=['label']),
hostname=dict(type='str'),
os=dict(type='str'),
snapshot=dict(type='str'),
plan=dict(type='str'),
force=dict(type='bool', default=False),
notify_activate=dict(type='bool', default=False),
private_network_enabled=dict(type='bool'),
auto_backup_enabled=dict(type='bool'),
ipv6_enabled=dict(type='bool'),
tag=dict(type='str'),
reserved_ip_v4=dict(type='str'),
firewall_group=dict(type='str'),
startup_script=dict(type='str'),
user_data=dict(type='str'),
ssh_keys=dict(type='list', aliases=['ssh_key']),
region=dict(type='str'),
state=dict(choices=['present', 'absent', 'restarted', 'reinstalled', 'started', 'stopped'], default='present'),
))
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
vultr_server = AnsibleVultrServer(module)
if module.params.get('state') == "absent":
server = vultr_server.absent_server()
else:
if module.params.get('state') == "started":
server = vultr_server.present_server()
server = vultr_server.start_server()
elif module.params.get('state') == "stopped":
server = vultr_server.present_server(start_server=False)
server = vultr_server.stop_server()
elif module.params.get('state') == "restarted":
server = vultr_server.present_server()
server = vultr_server.restart_server()
elif module.params.get('state') == "reinstalled":
server = vultr_server.reinstall_server()
else:
server = vultr_server.present_server()
result = vultr_server.get_result(server)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,707 |
vultr_server_info fails with "Could not find plans with VPSPLANID: 200"
|
<!--- Verify first that your issue is not already reported on GitHub -->
Not found.
<!--- Also test if the latest release and devel branch are affected too -->
Apologies, I'm not in a position to do that now.
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
vultr_server_info crashes for a server using a no-longer-available plan.
I have a Vultr server purchased as a special offer. That offer is no longer available (boo). It uses plan 200.
I suggest vultr_server_info should still return the other server data when it cannot find information about a plan.
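The crash comes from the module's plan-name lookup, which hard-fails when a plan ID is missing from the plan list. A minimal sketch of the suggested graceful fallback, written as a standalone helper (the helper name and fallback text are illustrative, not the module's actual code; the input mimics the dict-keyed-by-`VPSPLANID` shape that the v1 plan-list endpoint returns):

```python
# Sketch: tolerate plans that Vultr's /v1/plans/list no longer
# returns (e.g. the retired special-offer plan 200 from this
# report), instead of failing the whole info gather.
def plan_name(plans, plan_id):
    """Return the plan's name, or a placeholder when the plan ID
    is missing from the plan-list response (a dict keyed by string
    VPSPLANID values)."""
    plan = plans.get(str(plan_id))
    if plan is None:
        return "N/A (plan %s no longer listed)" % plan_id
    return plan.get("name")


# Example response shape; plan 200 is absent, as in the report.
plans = {"201": {"name": "1024 MB RAM,25 GB SSD,1.00 TB BW"}}
print(plan_name(plans, 201))  # the listed plan's name
print(plan_name(plans, 200))  # placeholder instead of a crash
```

With that kind of fallback the rest of the server fields would still be reported for servers on retired plans.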
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vultr_server_info
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /home/dylan/ansible/ansible.cfg
configured module search path = ['/home/dylan/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Oct 11 2019, 11:15:58) [Clang 8.0.1 (tags/RELEASE_801/final)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_ARGS(/home/dylan/ansible/ansible.cfg) = -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30
DEFAULT_BECOME_METHOD(/home/dylan/ansible/ansible.cfg) = doas
DEFAULT_HOST_LIST(/home/dylan/ansible/ansible.cfg) = ['/home/dylan/ansible/inventory']
DEFAULT_VAULT_PASSWORD_FILE(/home/dylan/ansible/ansible.cfg) = /home/dylan/.vault_password
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
OpenBSD 6.6 fully patched (master and target). Ansible acquired from pip/pip3. vultr_server_info is actually run on the target, which is the Vultr server in question. Vultr can provide all the info about the VM hardware, etc., I hope.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Ansible-playbook running script including yml below. Clearly, one needs a vultr account!
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Gather Vultr servers information
vultr_server_info:
api_key: "{{ vultr_api }}"
register: result
- name: Print the gathered information
debug:
var: result.vultr_server_info
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect to get information about my vultr server(s).
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
TASK [Gather Vultr servers information] ******************************************************************************************************************************************************
task path: /home/dylan/ansible/tasks/vultr/info.yml:123
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
<deaddog.example.com> (0, b'/home/ansible\n', b'')
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438 `" && echo ansible-tmp-1579766233.5392485-31526552794438="` echo /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438 `" ) && sleep 0'"'"''
<deaddog.example.com> (0, b'ansible-tmp-1579766233.5392485-31526552794438=/home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438\n', b'')
Using module file /usr/local/lib/python3.7/site-packages/ansible/modules/cloud/vultr/vultr_server_info.py
<deaddog.example.com> PUT /home/dylan/.ansible/tmp/ansible-local-445881sc7e_7k/tmp6l65zgg5 TO /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py
<deaddog.example.com> SSH: EXEC sftp -b - -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed '[deaddog.example.com]'
<deaddog.example.com> (0, b'sftp> put /home/dylan/.ansible/tmp/ansible-local-445881sc7e_7k/tmp6l65zgg5 /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py\n', b'')
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'chmod u+x /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/ /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py && sleep 0'"'"''
<deaddog.example.com> (0, b'', b'')
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed -tt deaddog.example.com '/bin/sh -c '"'"'doas /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-kyusbotingfhthztzyzhtjhwrwcuxdvz ; /usr/local/bin/python3 /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/AnsiballZ_vultr_server_info.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<deaddog.example.com> (1, b'\r\r\n\r\n{"msg": "Could not find plans with VPSPLANID: 200", "failed": true, "invocation": {"module_args": {"api_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "api_account": "default", "validate_certs": true, "api_timeout": null, "api_retries": null, "api_retry_max_delay": null, "api_endpoint": null}}}\r\n', b'Connection to deaddog.example.com closed.\r\n')
<deaddog.example.com> Failed to connect to the host via ssh: Connection to deaddog.example.com closed.
<deaddog.example.com> ESTABLISH SSH CONNECTION FOR USER: ansible
<deaddog.example.com> SSH: EXEC ssh -C -o ControlMaster=no -o ControlPersist=60s -o ServerAliveInterval=30 -o Port=9¾ -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/dylan/.ansible/cp/0a5f5743ed deaddog.example.com '/bin/sh -c '"'"'rm -f -r /home/ansible/.ansible/tmp/ansible-tmp-1579766233.5392485-31526552794438/ > /dev/null 2>&1 && sleep 0'"'"''
<deaddog.example.com> (0, b'', b'')
fatal: [butterfly]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"api_account": "default",
"api_endpoint": null,
"api_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"api_retries": null,
"api_retry_max_delay": null,
"api_timeout": null,
"validate_certs": true
}
},
"msg": "Could not find plans with VPSPLANID: 200"
}
[[NOTE: identifying information may have been nibbled ]]
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Gather Vultr servers information] ******************************************************************************************************************************************************
fatal: [butterfly]: FAILED! => {"changed": false, "msg": "Could not find plans with VPSPLANID: 200"}
```
|
https://github.com/ansible/ansible/issues/66707
|
https://github.com/ansible/ansible/pull/66792
|
2dc9841806499810f55c8284bef3d8206ccb20ee
|
78e666dd39e76c99e2c6d52a07cfa5cba175114a
| 2020-01-23T08:07:50Z |
python
| 2020-01-28T09:46:17Z |
lib/ansible/modules/cloud/vultr/vultr_server_info.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (c) 2018, Yanis Guenane <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: vultr_server_info
short_description: Gather information about the Vultr servers available.
description:
- Gather information about servers available.
version_added: "2.9"
author: "Yanis Guenane (@Spredzy)"
extends_documentation_fragment: vultr
'''
EXAMPLES = r'''
- name: Gather Vultr servers information
local_action:
module: vultr_server_info
register: result
- name: Print the gathered information
debug:
var: result.vultr_server_info
'''
RETURN = r'''
---
vultr_api:
description: Response from Vultr API with a few additions/modification
returned: success
type: complex
contains:
api_account:
description: Account used in the ini file to select the key
returned: success
type: str
sample: default
api_timeout:
description: Timeout used for the API requests
returned: success
type: int
sample: 60
api_retries:
description: Amount of max retries for the API requests
returned: success
type: int
sample: 5
api_retry_max_delay:
description: Exponential backoff delay in seconds between retries up to this max delay value.
returned: success
type: int
sample: 12
version_added: '2.9'
api_endpoint:
description: Endpoint used for the API requests
returned: success
type: str
sample: "https://api.vultr.com"
vultr_server_info:
description: Response from Vultr API
returned: success
type: complex
sample:
"vultr_server_info": [
{
"allowed_bandwidth_gb": 1000,
"auto_backup_enabled": false,
"application": null,
"cost_per_month": 5.00,
"current_bandwidth_gb": 0,
"date_created": "2018-07-19 08:23:03",
"default_password": "p4ssw0rd!",
"disk": "Virtual 25 GB",
"firewallgroup": null,
"id": 17241096,
"internal_ip": "",
"kvm_url": "https://my.vultr.com/subs/vps/novnc/api.php?data=OFB...",
"name": "ansibletest",
"os": "CentOS 7 x64",
"pending_charges": 0.01,
"plan": "1024 MB RAM,25 GB SSD,1.00 TB BW",
"power_status": "running",
"ram": "1024 MB",
"region": "Amsterdam",
"server_state": "ok",
"status": "active",
"tag": "",
"v4_gateway": "105.178.158.1",
"v4_main_ip": "105.178.158.181",
"v4_netmask": "255.255.254.0",
"v6_main_ip": "",
"v6_network": "",
"v6_network_size": "",
"v6_networks": [],
"vcpu_count": 1
}
]
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vultr import (
Vultr,
vultr_argument_spec,
)
class AnsibleVultrServerInfo(Vultr):
def __init__(self, module):
super(AnsibleVultrServerInfo, self).__init__(module, "vultr_server_info")
self.returns = {
"APPID": dict(key='application', convert_to='int', transform=self._get_application_name),
"FIREWALLGROUPID": dict(key='firewallgroup', transform=self._get_firewallgroup_name),
"SUBID": dict(key='id', convert_to='int'),
"VPSPLANID": dict(key='plan', convert_to='int', transform=self._get_plan_name),
"allowed_bandwidth_gb": dict(convert_to='int'),
'auto_backups': dict(key='auto_backup_enabled', convert_to='bool'),
"cost_per_month": dict(convert_to='float'),
"current_bandwidth_gb": dict(convert_to='float'),
"date_created": dict(),
"default_password": dict(),
"disk": dict(),
"gateway_v4": dict(key='v4_gateway'),
"internal_ip": dict(),
"kvm_url": dict(),
"label": dict(key='name'),
"location": dict(key='region'),
"main_ip": dict(key='v4_main_ip'),
"netmask_v4": dict(key='v4_netmask'),
"os": dict(),
"pending_charges": dict(convert_to='float'),
"power_status": dict(),
"ram": dict(),
"server_state": dict(),
"status": dict(),
"tag": dict(),
"v6_main_ip": dict(),
"v6_network": dict(),
"v6_network_size": dict(),
"v6_networks": dict(),
"vcpu_count": dict(convert_to='int'),
}
def _get_application_name(self, application):
if application == 0:
return None
return self.get_application(application, 'APPID').get('name')
def _get_firewallgroup_name(self, firewallgroup):
if firewallgroup == 0:
return None
return self.get_firewallgroup(firewallgroup, 'FIREWALLGROUPID').get('description')
def _get_plan_name(self, plan):
return self.get_plan(plan, 'VPSPLANID').get('name')
def get_servers(self):
return self.api_query(path="/v1/server/list")
def parse_servers_list(servers_list):
return [server for id, server in servers_list.items()]
def main():
argument_spec = vultr_argument_spec()
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
server_info = AnsibleVultrServerInfo(module)
result = server_info.get_result(parse_servers_list(server_info.get_servers()))
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,891 |
get_url contains deprecated call to be removed in 2.10
|
##### SUMMARY
get_url contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/net_tools/basics/get_url.py:478:12: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
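For context, the deprecated behaviour being removed is get_url's old C("key:value,key:value") string form of the headers option (a dict has been accepted since Ansible 2.6). A rough sketch of that legacy parsing, with a hypothetical helper name rather than the module's actual code:

```python
# Illustrative sketch of the deprecated "key:value,key:value"
# headers string format this issue removes; get_url has taken a
# dict since 2.6. Helper name is hypothetical.
def parse_legacy_headers(value):
    if isinstance(value, dict):
        return value  # already the modern dict form
    headers = {}
    for pair in value.split(','):
        key, sep, val = pair.partition(':')
        if not sep:
            raise ValueError("header %r is not in key:value form" % pair)
        headers[key.strip()] = val.strip()
    return headers


print(parse_legacy_headers("key1:one,key2:two"))
# {'key1': 'one', 'key2': 'two'}
```

Removing the string path leaves only the dict form, which is what the 2.10 cleanup in the linked PR does.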
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/net_tools/basics/get_url.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61891
|
https://github.com/ansible/ansible/pull/66649
|
abc8b0ae73af6c7d303c0bd25cc9ad6b5bba1d3a
|
365f2aaed1e628414898ec7106da272c245c52a2
| 2019-09-05T20:41:12Z |
python
| 2020-01-28T15:39:40Z |
changelogs/fragments/61891-get_url-remove-deprecated-string-headers.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,891 |
get_url contains deprecated call to be removed in 2.10
|
##### SUMMARY
get_url contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/net_tools/basics/get_url.py:478:12: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/net_tools/basics/get_url.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61891
|
https://github.com/ansible/ansible/pull/66649
|
abc8b0ae73af6c7d303c0bd25cc9ad6b5bba1d3a
|
365f2aaed1e628414898ec7106da272c245c52a2
| 2019-09-05T20:41:12Z |
python
| 2020-01-28T15:39:40Z |
lib/ansible/modules/net_tools/basics/get_url.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: get_url
short_description: Downloads files from HTTP, HTTPS, or FTP to node
description:
- Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote
server I(must) have direct access to the remote resource.
- By default, if an environment variable C(<protocol>_proxy) is set on
the target host, requests will be sent through that proxy. This
behaviour can be overridden by setting a variable for this task
(see `setting the environment
<https://docs.ansible.com/playbooks_environment.html>`_),
or by using the use_proxy option.
- HTTP redirects can redirect from HTTP to HTTPS so you should be sure that
your proxy environment for both protocols is correct.
- From Ansible 2.4 when run with C(--check), it will do a HEAD request to validate the URL but
will not download the entire file or verify it against hashes.
- For Windows targets, use the M(win_get_url) module instead.
version_added: '0.6'
options:
url:
description:
- HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path
type: str
required: true
dest:
description:
- Absolute path of where to download the file to.
- If C(dest) is a directory, either the server provided filename or, if
none provided, the base name of the URL on the remote server will be
used. If a directory, C(force) has no effect.
- If C(dest) is a directory, the file will always be downloaded
(regardless of the C(force) option), but replaced only if the contents changed.
type: path
required: true
tmp_dest:
description:
- Absolute path of where temporary file is downloaded to.
- When run on Ansible 2.5 or greater, path defaults to ansible's remote_tmp setting
- When run on Ansible prior to 2.5, it defaults to C(TMPDIR), C(TEMP) or C(TMP) env variables or a platform specific value.
- U(https://docs.python.org/2/library/tempfile.html#tempfile.tempdir)
type: path
version_added: '2.1'
force:
description:
- If C(yes) and C(dest) is not a directory, will download the file every
time and replace the file if the contents change. If C(no), the file
will only be downloaded if the destination does not exist. Generally
should be C(yes) only for small local files.
- Prior to 0.6, this module behaved as if C(yes) was the default.
- Alias C(thirsty) has been deprecated and will be removed in 2.13.
type: bool
default: no
aliases: [ thirsty ]
version_added: '0.7'
backup:
description:
- Create a backup file including the timestamp information so you can get
the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '2.1'
sha256sum:
description:
- If a SHA-256 checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
This option is deprecated and will be removed in version 2.14. Use
option C(checksum) instead.
default: ''
version_added: "1.3"
checksum:
description:
- 'If a checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
Format: <algorithm>:<checksum|url>, e.g. checksum="sha256:D98291AC[...]B6DC7B97",
checksum="sha256:http://example.com/path/sha256sum.txt"'
- If you worry about portability, only the sha1 algorithm is available
on all platforms and python versions.
- The third party hashlib library can be installed for access to additional algorithms.
- Additionally, if a checksum is passed to this parameter, and the file exist under
the C(dest) location, the I(destination_checksum) would be calculated, and if
checksum equals I(destination_checksum), the file download would be skipped
(unless C(force) is true). If the checksum does not equal I(destination_checksum),
the destination file is deleted.
type: str
default: ''
version_added: "2.0"
use_proxy:
description:
- If C(no), it will not use a proxy, even if one is defined in
an environment variable on the target hosts.
type: bool
default: yes
validate_certs:
description:
- If C(no), SSL certificates will not be validated.
- This should only be used on personally controlled sites using self-signed certificates.
type: bool
default: yes
timeout:
description:
- Timeout in seconds for URL request.
type: int
default: 10
version_added: '1.8'
headers:
description:
- Add custom HTTP headers to a request in hash/dict format.
- The hash/dict format was added in Ansible 2.6.
- Previous versions used a C("key:value,key:value") string format.
- The C("key:value,key:value") string format is deprecated and will be removed in version 2.10.
type: raw
version_added: '2.0'
url_username:
description:
- The username for use in HTTP basic authentication.
- This parameter can be used without C(url_password) for sites that allow empty passwords.
- Since version 2.8 you can also use the C(username) alias for this option.
type: str
aliases: ['username']
version_added: '1.6'
url_password:
description:
- The password for use in HTTP basic authentication.
- If the C(url_username) parameter is not specified, the C(url_password) parameter will not be used.
- Since version 2.8 you can also use the C(password) alias for this option.
type: str
aliases: ['password']
version_added: '1.6'
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
- httplib2, the library used by the uri module, only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail.
type: bool
default: no
version_added: '2.0'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key as well, and if the key is included, C(client_key) is not required.
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If C(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
# informational: requirements for nodes
extends_documentation_fragment:
- files
notes:
- For Windows targets, use the M(win_get_url) module instead.
seealso:
- module: uri
- module: win_get_url
author:
- Jan-Piet Mens (@jpmens)
'''
EXAMPLES = r'''
- name: Download foo.conf
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
mode: '0440'
- name: Download file and force basic auth
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
force_basic_auth: yes
- name: Download file with custom HTTP headers
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
headers:
key1: one
key2: two
- name: Download file with check (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c
- name: Download file with check (md5)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: md5:66dffb5228a211e61d6d7ef4a86f5758
- name: Download file with checksum url (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:http://example.com/path/sha256sum.txt
- name: Download file from a file path
get_url:
url: file:///tmp/afile.txt
dest: /tmp/afilecopy.txt
- name: >-
    Fetch file that requires authentication.
    username/password only available since 2.8, in older versions you need to use url_username/url_password
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
username: bar
password: '{{ mysecret }}'
'''
RETURN = r'''
backup_file:
description: name of backup file created after download
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
checksum_dest:
description: sha1 checksum of the file after copy
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
checksum_src:
description: sha1 checksum of the file
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
dest:
description: destination file/path
returned: success
type: str
sample: /path/to/file.txt
elapsed:
description: The number of seconds that elapsed while performing the download
returned: always
type: int
sample: 23
gid:
description: group id of the file
returned: success
type: int
sample: 100
group:
description: group of the file
returned: success
type: str
sample: "httpd"
md5sum:
description: md5 checksum of the file after download
returned: when supported
type: str
sample: "2a5aeecc61dc98c4d780b14b330e3282"
mode:
description: permissions of the target
returned: success
type: str
sample: "0644"
msg:
description: the HTTP message from the request
returned: always
type: str
sample: OK (unknown bytes)
owner:
description: owner of the file
returned: success
type: str
sample: httpd
secontext:
description: the SELinux security context of the file
returned: success
type: str
sample: unconfined_u:object_r:user_tmp_t:s0
size:
description: size of the target
returned: success
type: int
sample: 1220
src:
description: source file used after download
returned: always
type: str
sample: /tmp/tmpAdFLdV
state:
description: state of the target
returned: success
type: str
sample: file
status_code:
description: the HTTP status code from the request
returned: always
type: int
sample: 200
uid:
description: owner id of the file, after execution
returned: success
type: int
sample: 100
url:
description: the actual URL used for the request
returned: always
type: str
sample: https://www.ansible.com/
'''
import datetime
import os
import re
import shutil
import tempfile
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlsplit
from ansible.module_utils._text import to_native
from ansible.module_utils.urls import fetch_url, url_argument_spec
# ==============================================================
# url handling
def url_filename(url):
fn = os.path.basename(urlsplit(url)[2])
if fn == '':
return 'index.html'
return fn
def url_get(module, url, dest, use_proxy, last_mod_time, force, timeout=10, headers=None, tmp_dest=''):
"""
Download data from the url and store in a temporary file.
Return (tempfile, info about the request)
"""
if module.check_mode:
method = 'HEAD'
else:
method = 'GET'
start = datetime.datetime.utcnow()
rsp, info = fetch_url(module, url, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout, headers=headers, method=method)
elapsed = (datetime.datetime.utcnow() - start).seconds
if info['status'] == 304:
module.exit_json(url=url, dest=dest, changed=False, msg=info.get('msg', ''), status_code=info['status'], elapsed=elapsed)
# Exceptions in fetch_url may result in a status of -1; this ensures a proper error reaches the user in all cases
if info['status'] == -1:
module.fail_json(msg=info['msg'], url=url, dest=dest, elapsed=elapsed)
if info['status'] != 200 and not url.startswith('file:/') and not (url.startswith('ftp:/') and info.get('msg', '').startswith('OK')):
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], url=url, dest=dest, elapsed=elapsed)
# create a temporary file and copy content to do checksum-based replacement
if tmp_dest:
# tmp_dest should be an existing dir
tmp_dest_is_dir = os.path.isdir(tmp_dest)
if not tmp_dest_is_dir:
if os.path.exists(tmp_dest):
module.fail_json(msg="%s is a file but should be a directory." % tmp_dest, elapsed=elapsed)
else:
module.fail_json(msg="%s directory does not exist." % tmp_dest, elapsed=elapsed)
else:
tmp_dest = module.tmpdir
fd, tempname = tempfile.mkstemp(dir=tmp_dest)
f = os.fdopen(fd, 'wb')
try:
shutil.copyfileobj(rsp, f)
except Exception as e:
os.remove(tempname)
module.fail_json(msg="failed to create temporary content file: %s" % to_native(e), elapsed=elapsed, exception=traceback.format_exc())
f.close()
rsp.close()
return tempname, info
def extract_filename_from_headers(headers):
"""
Extracts a filename from the given dict of HTTP headers.
Looks for the content-disposition header and applies a regex.
Returns the filename if successful, else None."""
cont_disp_regex = 'attachment; ?filename="?([^"]+)'
res = None
if 'content-disposition' in headers:
cont_disp = headers['content-disposition']
match = re.match(cont_disp_regex, cont_disp)
if match:
res = match.group(1)
# Try preventing any funny business.
res = os.path.basename(res)
return res
# ==============================================================
# main
def main():
argument_spec = url_argument_spec()
# setup aliases
argument_spec['url_username']['aliases'] = ['username']
argument_spec['url_password']['aliases'] = ['password']
argument_spec.update(
url=dict(type='str', required=True),
dest=dict(type='path', required=True),
backup=dict(type='bool'),
sha256sum=dict(type='str', default=''),
checksum=dict(type='str', default=''),
timeout=dict(type='int', default=10),
headers=dict(type='raw'),
tmp_dest=dict(type='path'),
)
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=argument_spec,
add_file_common_args=True,
supports_check_mode=True,
mutually_exclusive=[['checksum', 'sha256sum']],
)
if module.params.get('thirsty'):
module.deprecate('The alias "thirsty" has been deprecated and will be removed, use "force" instead', version='2.13')
if module.params.get('sha256sum'):
module.deprecate('The parameter "sha256sum" has been deprecated and will be removed, use "checksum" instead', version='2.14')
url = module.params['url']
dest = module.params['dest']
backup = module.params['backup']
force = module.params['force']
sha256sum = module.params['sha256sum']
checksum = module.params['checksum']
use_proxy = module.params['use_proxy']
timeout = module.params['timeout']
tmp_dest = module.params['tmp_dest']
result = dict(
changed=False,
checksum_dest=None,
checksum_src=None,
dest=dest,
elapsed=0,
url=url,
)
# Parse headers to dict
if isinstance(module.params['headers'], dict):
headers = module.params['headers']
elif module.params['headers']:
try:
headers = dict(item.split(':', 1) for item in module.params['headers'].split(','))
module.deprecate('Supplying `headers` as a string is deprecated. Please use dict/hash format for `headers`', version='2.10')
except Exception:
module.fail_json(msg="The string representation for the `headers` parameter requires a key:value,key:value syntax to be properly parsed.", **result)
else:
headers = None
dest_is_dir = os.path.isdir(dest)
last_mod_time = None
# workaround for usage of deprecated sha256sum parameter
if sha256sum:
checksum = 'sha256:%s' % (sha256sum)
# checksum specified, parse for algorithm and checksum
if checksum:
try:
algorithm, checksum = checksum.split(':', 1)
except ValueError:
module.fail_json(msg="The checksum parameter has to be in format <algorithm>:<checksum>", **result)
if checksum.startswith('http://') or checksum.startswith('https://') or checksum.startswith('ftp://'):
checksum_url = checksum
# download checksum file to checksum_tmpsrc
checksum_tmpsrc, checksum_info = url_get(module, checksum_url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
with open(checksum_tmpsrc) as f:
lines = [line.rstrip('\n') for line in f]
os.remove(checksum_tmpsrc)
checksum_map = {}
for line in lines:
parts = line.split(None, 1)
if len(parts) == 2:
checksum_map[parts[0]] = parts[1]
filename = url_filename(url)
# Look through each line in the checksum file for a hash corresponding to
# the filename in the url, returning the first hash that is found.
for cksum in (s for (s, f) in checksum_map.items() if f.strip('./') == filename):
checksum = cksum
break
else:
checksum = None
if checksum is None:
module.fail_json(msg="Unable to find a checksum for file '%s' in '%s'" % (filename, checksum_url))
# Remove any non-alphanumeric characters, including the infamous
# Unicode zero-width space
checksum = re.sub(r'\W+', '', checksum).lower()
# Ensure the checksum portion is a hexdigest
try:
int(checksum, 16)
except ValueError:
module.fail_json(msg='The checksum format is invalid', **result)
if not dest_is_dir and os.path.exists(dest):
checksum_mismatch = False
# If the download is not forced and there is a checksum, allow
# checksum match to skip the download.
if not force and checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
checksum_mismatch = True
# Not forcing redownload, unless checksum does not match
if not force and checksum and not checksum_mismatch:
# allow file attribute changes
module.params['path'] = dest
file_args = module.load_file_common_arguments(module.params)
file_args['path'] = dest
result['changed'] = module.set_fs_attributes_if_different(file_args, False)
if result['changed']:
module.exit_json(msg="file already exists but file attributes changed", **result)
module.exit_json(msg="file already exists", **result)
# If the file already exists, prepare the last modified time for the
# request.
mtime = os.path.getmtime(dest)
last_mod_time = datetime.datetime.utcfromtimestamp(mtime)
# If the checksum does not match we have to force the download
# because last_mod_time may be newer than on remote
if checksum_mismatch:
force = True
# download to tmpsrc
start = datetime.datetime.utcnow()
tmpsrc, info = url_get(module, url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
result['elapsed'] = (datetime.datetime.utcnow() - start).seconds
result['src'] = tmpsrc
# Now the request has completed, we can finally generate the final
# destination file name from the info dict.
if dest_is_dir:
filename = extract_filename_from_headers(info)
if not filename:
# Fall back to extracting the filename from the URL.
# Pluck the URL from the info, since a redirect could have changed
# it.
filename = url_filename(info['url'])
dest = os.path.join(dest, filename)
result['dest'] = dest
# raise an error if there is no tmpsrc file
if not os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], **result)
if not os.access(tmpsrc, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Source %s is not readable" % (tmpsrc), **result)
result['checksum_src'] = module.sha1(tmpsrc)
# check if there is no dest file
if os.path.exists(dest):
# raise an error if copy has no permission on dest
if not os.access(dest, os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (dest), **result)
if not os.access(dest, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not readable" % (dest), **result)
result['checksum_dest'] = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(dest)):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s does not exist" % (os.path.dirname(dest)), **result)
if not os.access(os.path.dirname(dest), os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (os.path.dirname(dest)), **result)
if module.check_mode:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
result['changed'] = ('checksum_dest' not in result or
result['checksum_src'] != result['checksum_dest'])
module.exit_json(msg=info.get('msg', ''), **result)
backup_file = None
if result['checksum_src'] != result['checksum_dest']:
try:
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
module.atomic_move(tmpsrc, dest)
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)),
exception=traceback.format_exc(), **result)
result['changed'] = True
else:
result['changed'] = False
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
if checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
os.remove(dest)
module.fail_json(msg="The checksum for %s did not match %s; it was %s." % (dest, checksum, destination_checksum), **result)
# allow file attribute changes
module.params['path'] = dest
file_args = module.load_file_common_arguments(module.params)
file_args['path'] = dest
result['changed'] = module.set_fs_attributes_if_different(file_args, result['changed'])
# Backwards compat only. We'll return None on FIPS enabled systems
try:
result['md5sum'] = module.md5(dest)
except ValueError:
result['md5sum'] = None
if backup_file:
result['backup_file'] = backup_file
# Mission complete
module.exit_json(msg=info.get('msg', ''), status_code=info.get('status', ''), **result)
if __name__ == '__main__':
main()
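The checksum handling in main() above — split on the first colon, strip non-alphanumeric characters such as a zero-width space, then validate the remainder as a hexdigest — can be sketched as a standalone helper. `parse_checksum` is a hypothetical name used here for illustration; it is not a function in the module:

```python
import re


def parse_checksum(checksum):
    """Split an "<algorithm>:<hexdigest>" string the way get_url's main() does.

    Raises ValueError when the colon separator is missing or when the digest
    is not valid hexadecimal, mirroring the module's fail_json paths.
    """
    # Raises ValueError if there is no ':' separator, like the module's check
    algorithm, digest = checksum.split(':', 1)
    # Remove any non-alphanumeric characters (e.g. a stray zero-width space)
    # and lowercase, exactly as main() does before comparing digests
    digest = re.sub(r'\W+', '', digest).lower()
    int(digest, 16)  # ensure the checksum portion is a hexdigest
    return algorithm, digest
```

With this shape, `parse_checksum('sha256:B5BB9D80.')` yields `('sha256', 'b5bb9d80')`: the trailing dot is stripped and the digest lowercased before it is compared against `digest_from_file()`.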
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,891 |
get_url contains deprecated call to be removed in 2.10
|
##### SUMMARY
get_url contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/net_tools/basics/get_url.py:478:12: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/net_tools/basics/get_url.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61891
|
https://github.com/ansible/ansible/pull/66649
|
abc8b0ae73af6c7d303c0bd25cc9ad6b5bba1d3a
|
365f2aaed1e628414898ec7106da272c245c52a2
| 2019-09-05T20:41:12Z |
python
| 2020-01-28T15:39:40Z |
test/integration/targets/get_url/tasks/main.yml
|
# Test code for the get_url module
# (c) 2014, Richard Isaacson <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <https://www.gnu.org/licenses/>.
- name: Determine if python looks like it will support modern ssl features like SNI
command: "{{ ansible_python.executable }} -c 'from ssl import SSLContext'"
ignore_errors: True
register: python_test
- name: Set python_has_sslcontext if we have it
set_fact:
python_has_ssl_context: True
when: python_test.rc == 0
- name: Set python_has_sslcontext False if we don't have it
set_fact:
python_has_ssl_context: False
when: python_test.rc != 0
- name: Define test files for file schema
set_fact:
geturl_srcfile: "{{ remote_tmp_dir }}/aurlfile.txt"
geturl_dstfile: "{{ remote_tmp_dir }}/aurlfile_copy.txt"
- name: Create source file
copy:
dest: "{{ geturl_srcfile }}"
content: "foobar"
register: source_file_copied
- name: test file fetch
get_url:
url: "file://{{ source_file_copied.dest }}"
dest: "{{ geturl_dstfile }}"
register: result
- name: assert success and change
assert:
that:
- result is changed
- '"OK" in result.msg'
- name: test nonexisting file fetch
get_url:
url: "file://{{ source_file_copied.dest }}NOFILE"
dest: "{{ geturl_dstfile }}NOFILE"
register: result
ignore_errors: True
- name: assert success and change
assert:
that:
- result is failed
- name: test HTTP HEAD request for file in check mode
get_url:
url: "https://{{ httpbin_host }}/get"
dest: "{{ remote_tmp_dir }}/get_url_check.txt"
force: yes
check_mode: True
register: result
- name: assert that the HEAD request was successful in check mode
assert:
that:
- result is changed
- '"OK" in result.msg'
- name: test HTTP HEAD for nonexistent URL in check mode
get_url:
url: "https://{{ httpbin_host }}/DOESNOTEXIST"
dest: "{{ remote_tmp_dir }}/shouldnotexist.html"
force: yes
check_mode: True
register: result
ignore_errors: True
- name: assert that HEAD request for nonexistent URL failed
assert:
that:
- result is failed
- name: test https fetch
get_url: url="https://{{ httpbin_host }}/get" dest={{remote_tmp_dir}}/get_url.txt force=yes
register: result
- name: assert the get_url call was successful
assert:
that:
- result is changed
- '"OK" in result.msg'
- name: test https fetch to a site with mismatched hostname and certificate
get_url:
url: "https://{{ badssl_host }}/"
dest: "{{ remote_tmp_dir }}/shouldnotexist.html"
ignore_errors: True
register: result
- stat:
path: "{{ remote_tmp_dir }}/shouldnotexist.html"
register: stat_result
- name: Assert that the file was not downloaded
assert:
that:
- "result is failed"
- "'Failed to validate the SSL certificate' in result.msg or 'Hostname mismatch' in result.msg or ( result.msg is match('hostname .* doesn.t match .*'))"
- "stat_result.stat.exists == false"
- name: test https fetch to a site with mismatched hostname and certificate and validate_certs=no
get_url:
url: "https://{{ badssl_host }}/"
dest: "{{ remote_tmp_dir }}/get_url_no_validate.html"
validate_certs: no
register: result
- stat:
path: "{{ remote_tmp_dir }}/get_url_no_validate.html"
register: stat_result
- name: Assert that the file was downloaded
assert:
that:
- result is changed
- "stat_result.stat.exists == true"
# SNI Tests
# SNI is only built into the stdlib from python-2.7.9 onwards
- name: Test that SNI works
get_url:
url: 'https://{{ sni_host }}/'
dest: "{{ remote_tmp_dir }}/sni.html"
register: get_url_result
ignore_errors: True
- command: "grep '{{ sni_host }}' {{ remote_tmp_dir}}/sni.html"
register: data_result
when: python_has_ssl_context
- debug:
var: get_url_result
- name: Assert that SNI works with this python version
assert:
that:
- 'data_result.rc == 0'
when: python_has_ssl_context
# If the client doesn't support SNI then get_url should have failed with a certificate mismatch
- name: Assert that hostname verification failed because SNI is not supported on this version of python
assert:
that:
- 'get_url_result is failed'
when: not python_has_ssl_context
# These tests are just side effects of how the site is hosted. It's not
# specifically a test site. So the tests may break due to the hosting changing
- name: Test that SNI works
get_url:
url: 'https://{{ sni_host }}/'
dest: "{{ remote_tmp_dir }}/sni.html"
register: get_url_result
ignore_errors: True
- command: "grep '{{ sni_host }}' {{ remote_tmp_dir}}/sni.html"
register: data_result
when: python_has_ssl_context
- debug:
var: get_url_result
- name: Assert that SNI works with this python version
assert:
that:
- 'data_result.rc == 0'
- 'get_url_result is not failed'
when: python_has_ssl_context
# If the client doesn't support SNI then get_url should have failed with a certificate mismatch
- name: Assert that hostname verification failed because SNI is not supported on this version of python
assert:
that:
- 'get_url_result is failed'
when: not python_has_ssl_context
# End hacky SNI test section
- name: Test get_url with redirect
get_url:
url: 'https://{{ httpbin_host }}/redirect/6'
dest: "{{ remote_tmp_dir }}/redirect.json"
- name: Test that setting file modes work
get_url:
url: 'https://{{ httpbin_host }}/'
dest: '{{ remote_tmp_dir }}/test'
mode: '0707'
register: result
- stat:
path: "{{ remote_tmp_dir }}/test"
register: stat_result
- name: Assert that the file has the right permissions
assert:
that:
- result is changed
- "stat_result.stat.mode == '0707'"
- name: Test that setting file modes on an already downloaded file work
get_url:
url: 'https://{{ httpbin_host }}/'
dest: '{{ remote_tmp_dir }}/test'
mode: '0070'
register: result
- stat:
path: "{{ remote_tmp_dir }}/test"
register: stat_result
- name: Assert that the file has the right permissions
assert:
that:
- result is changed
- "stat_result.stat.mode == '0070'"
# https://github.com/ansible/ansible/pull/65307/
- name: Test that on http status 304, we get a status_code field.
get_url:
url: 'https://{{ httpbin_host }}/status/304'
dest: '{{ remote_tmp_dir }}/test'
register: result
- name: Assert that we get the appropriate status_code
assert:
that:
- "'status_code' in result"
- "result.status_code == 304"
# https://github.com/ansible/ansible/issues/29614
- name: Change mode on an already downloaded file and specify checksum
get_url:
url: 'https://{{ httpbin_host }}/get'
dest: '{{ remote_tmp_dir }}/test'
checksum: 'sha256:7036ede810fad2b5d2e7547ec703cae8da61edbba43c23f9d7203a0239b765c4.'
mode: '0775'
register: result
- stat:
path: "{{ remote_tmp_dir }}/test"
register: stat_result
- name: Assert that file permissions on already downloaded file were changed
assert:
that:
- result is changed
- "stat_result.stat.mode == '0775'"
- name: test checksum match in check mode
get_url:
url: 'https://{{ httpbin_host }}/get'
dest: '{{ remote_tmp_dir }}/test'
checksum: 'sha256:7036ede810fad2b5d2e7547ec703cae8da61edbba43c23f9d7203a0239b765c4.'
check_mode: True
register: result
- name: Assert that check mode was green
assert:
that:
- result is not changed
- name: Get a file that already exists with a checksum
get_url:
url: 'https://{{ httpbin_host }}/cache'
dest: '{{ remote_tmp_dir }}/test'
checksum: 'sha1:{{ stat_result.stat.checksum }}'
register: result
- name: Assert that the file was not downloaded
assert:
that:
- result.msg == 'file already exists'
- name: Get a file that already exists
get_url:
url: 'https://{{ httpbin_host }}/cache'
dest: '{{ remote_tmp_dir }}/test'
register: result
- name: Assert that we didn't re-download unnecessarily
assert:
that:
- result is not changed
- "'304' in result.msg"
- name: get a file that doesn't respond to If-Modified-Since without checksum
get_url:
url: 'https://{{ httpbin_host }}/get'
dest: '{{ remote_tmp_dir }}/test'
register: result
- name: Assert that we downloaded the file
assert:
that:
- result is changed
# https://github.com/ansible/ansible/issues/27617
- name: set role facts
set_fact:
http_port: 27617
files_dir: '{{ remote_tmp_dir }}/files'
- name: create files_dir
file:
dest: "{{ files_dir }}"
state: directory
- name: create src file
copy:
dest: '{{ files_dir }}/27617.txt'
content: "ptux"
- name: create sha1 checksum file of src
copy:
dest: '{{ files_dir }}/sha1sum.txt'
content: |
a97e6837f60cec6da4491bab387296bbcd72bdba 27617.txt
3911340502960ca33aece01129234460bfeb2791 not_target1.txt
1b4b6adf30992cedb0f6edefd6478ff0a593b2e4 not_target2.txt
- name: create sha256 checksum file of src
copy:
dest: '{{ files_dir }}/sha256sum.txt'
content: |
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. 27617.txt
30949cc401e30ac494d695ab8764a9f76aae17c5d73c67f65e9b558f47eff892 not_target1.txt
d0dbfc1945bc83bf6606b770e442035f2c4e15c886ee0c22fb3901ba19900b5b not_target2.txt
- name: create sha256 checksum file of src with a dot leading path
copy:
dest: '{{ files_dir }}/sha256sum_with_dot.txt'
content: |
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. ./27617.txt
30949cc401e30ac494d695ab8764a9f76aae17c5d73c67f65e9b558f47eff892 ./not_target1.txt
d0dbfc1945bc83bf6606b770e442035f2c4e15c886ee0c22fb3901ba19900b5b ./not_target2.txt
- copy:
src: "testserver.py"
dest: "{{ remote_tmp_dir }}/testserver.py"
- name: start SimpleHTTPServer for issues 27617
shell: cd {{ files_dir }} && {{ ansible_python.executable }} {{ remote_tmp_dir}}/testserver.py {{ http_port }}
async: 90
poll: 0
- name: download src with sha1 checksum url
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}'
checksum: 'sha1:http://localhost:{{ http_port }}/sha1sum.txt'
register: result_sha1
- stat:
path: "{{ remote_tmp_dir }}/27617.txt"
register: stat_result_sha1
- name: download src with sha256 checksum url
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}/27617sha256.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum.txt'
register: result_sha256
- stat:
path: "{{ remote_tmp_dir }}/27617.txt"
register: stat_result_sha256
- name: download src with sha256 checksum url with dot leading paths
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}/27617sha256_with_dot.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum_with_dot.txt'
register: result_sha256_with_dot
- stat:
path: "{{ remote_tmp_dir }}/27617sha256_with_dot.txt"
register: stat_result_sha256_with_dot
- name: Assert that the file was downloaded
assert:
that:
- result_sha1 is changed
- result_sha256 is changed
- result_sha256_with_dot is changed
- "stat_result_sha1.stat.exists == true"
- "stat_result_sha256.stat.exists == true"
- "stat_result_sha256_with_dot.stat.exists == true"
#https://github.com/ansible/ansible/issues/16191
- name: Test url split with no filename
get_url:
url: https://{{ httpbin_host }}
dest: "{{ remote_tmp_dir }}"
- name: Test headers string
get_url:
url: https://{{ httpbin_host }}/headers
headers: Foo:bar,Baz:qux
dest: "{{ remote_tmp_dir }}/headers_string.json"
- name: Get downloaded file
slurp:
src: "{{ remote_tmp_dir }}/headers_string.json"
register: result
- name: Test headers string
assert:
that:
- (result.content | b64decode | from_json).headers.get('Foo') == 'bar'
- (result.content | b64decode | from_json).headers.get('Baz') == 'qux'
- name: Test headers string invalid format
get_url:
url: https://{{ httpbin_host }}/headers
headers: Foo
dest: "{{ remote_tmp_dir }}/headers_string_invalid.json"
register: invalid_string_headers
failed_when:
- invalid_string_headers is successful
- invalid_string_headers.msg != "The string representation for the `headers` parameter requires a key:value,key:value syntax to be properly parsed."
- name: Test headers dict
get_url:
url: https://{{ httpbin_host }}/headers
headers:
Foo: bar
Baz: qux
dest: "{{ remote_tmp_dir }}/headers_dict.json"
- name: Get downloaded file
slurp:
src: "{{ remote_tmp_dir }}/headers_dict.json"
register: result
- name: Test headers dict
assert:
that:
- (result.content | b64decode | from_json).headers.get('Foo') == 'bar'
- (result.content | b64decode | from_json).headers.get('Baz') == 'qux'
- name: Test client cert auth, with certs
get_url:
url: "https://ansible.http.tests/ssl_client_verify"
client_cert: "{{ remote_tmp_dir }}/client.pem"
client_key: "{{ remote_tmp_dir }}/client.key"
dest: "{{ remote_tmp_dir }}/ssl_client_verify"
when: has_httptester
- name: Get downloaded file
slurp:
src: "{{ remote_tmp_dir }}/ssl_client_verify"
register: result
when: has_httptester
- name: Assert that the ssl_client_verify file contains the correct content
assert:
that:
- '(result.content | b64decode) == "ansible.http.tests:SUCCESS"'
when: has_httptester
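The dot-leading-path cases exercised above correspond to the lookup get_url performs on a downloaded checksum file: each line is split into a (hash, name) pair and the name is compared after `str.strip('./')`, which is why entries written as `./27617.txt` still match `27617.txt`. A minimal sketch of that matching (`find_checksum` is a hypothetical helper, not the module's own function):

```python
def find_checksum(lines, filename):
    """Find the hash for ``filename`` among checksum-file lines ("<hash>  <name>").

    Mirrors the lookup get_url performs when ``checksum`` is a URL: lines are
    split on whitespace into (hash, name), and names are compared after
    str.strip('./') so dot-leading relative paths still match.
    """
    checksum_map = {}
    for line in lines:
        parts = line.split(None, 1)
        if len(parts) == 2:
            checksum_map[parts[0]] = parts[1]
    # Return the first hash whose filename matches, else None
    for cksum, fname in checksum_map.items():
        if fname.strip('./') == filename:
            return cksum
    return None
```

Note that `strip('./')` removes any leading or trailing `.` and `/` characters rather than a literal `./` prefix; for the filenames in these tests the effect is the same.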
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,858 |
Unable to create new logical network on oVirt
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Unable to create new logical network on oVirt
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ovirt_network
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
master
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: localhost
tasks:
- ovirt_auth:
state: present
username: admin@internal
password: '123456'
url: https://ovirt-master.virt/ovirt-engine/api
insecure: true
- ovirt_network:
auth: "{{ ovirt_auth }}"
data_center: Default
name: network_123
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
network is created
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ovirt_network_payload_32f6rvf8/ansible_ovirt_network_payload.zip/ansible/modules/cloud/ovirt/ovirt_network.py", line 327, in main
File "/tmp/ansible_ovirt_network_payload_32f6rvf8/ansible_ovirt_network_payload.zip/ansible/module_utils/ovirt.py", line 620, in create
self.build_entity(),
File "/tmp/ansible_ovirt_network_payload_32f6rvf8/ansible_ovirt_network_payload.zip/ansible/modules/cloud/ovirt/ovirt_network.py", line 175, in build_entity
File "/tmp/ansible_ovirt_network_payload_32f6rvf8/ansible_ovirt_network_payload.zip/ansible/module_utils/ovirt.py", line 327, in get_id_by_name
raise Exception("Entity '%s' was not found.".format(service, name, entity))
Exception: Entity '%s' was not found.
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"clusters": null,
"comment": null,
"data_center": "Default",
"description": null,
"external_provider": null,
"fetch_nested": false,
"id": null,
"label": null,
"mtu": null,
"name": "network_123",
"nested_attributes": [],
"poll_interval": 3,
"state": "present",
"timeout": 180,
"vlan_tag": null,
"vm_network": null,
"wait": true
}
},
"msg": "Entity '%s' was not found."
}
PLAY RECAP *****************************************************************************************************************************************************************$
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
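The error text in the traceback is itself a bug: `"Entity '%s' was not found.".format(service, name, entity)` mixes a `%`-style placeholder with `str.format`, so the name of the missing entity is never interpolated into the message. A quick demonstration:

```python
# The template has no {} replacement fields, so .format() silently returns
# it unchanged and the arguments are discarded:
template = "Entity '%s' was not found."
unchanged = template.format("service", "name", "entity")

# Either classic %-interpolation or a {} field would have shown the name:
percent_style = template % "network_123"
format_style = "Entity '{0}' was not found.".format("network_123")
```

This is why the failure message in the output above reads literally `Entity '%s' was not found.` instead of naming the missing external provider.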
|
https://github.com/ansible/ansible/issues/66858
|
https://github.com/ansible/ansible/pull/66859
|
d385a648c456f14e379e53d4545dbbc6be1ae9e9
|
b74ca2fe4f8818a17f093ea083fe30263d6dfbdb
| 2020-01-28T15:12:31Z |
python
| 2020-01-28T16:08:46Z |
lib/ansible/modules/cloud/ovirt/ovirt_network.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016 Red Hat, Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_network
short_description: Module to manage logical networks in oVirt/RHV
version_added: "2.3"
author: "Ondra Machacek (@machacekondra)"
description:
- "Module to manage logical networks in oVirt/RHV"
options:
id:
description:
- "ID of the network to manage."
version_added: "2.8"
name:
description:
- "Name of the network to manage."
required: true
state:
description:
- "Should the network be present or absent"
choices: ['present', 'absent']
default: present
data_center:
description:
- "Datacenter name where network reside."
description:
description:
- "Description of the network."
comment:
description:
- "Comment of the network."
vlan_tag:
description:
- "Specify VLAN tag."
external_provider:
description:
- "Name of external network provider."
- "At first it tries to import the network when not found it will create network in external provider."
version_added: 2.8
vm_network:
description:
- "If I(True) network will be marked as network for VM."
- "VM network carries traffic relevant to the virtual machine."
type: bool
mtu:
description:
- "Maximum transmission unit (MTU) of the network."
clusters:
description:
- "List of dictionaries describing how the network is managed in specific cluster."
suboptions:
name:
description:
- Cluster name.
assigned:
description:
- I(true) if the network should be assigned to cluster. Default is I(true).
type: bool
required:
description:
- I(true) if the network must remain operational for all hosts associated with this network.
type: bool
display:
description:
                    - I(true) if the network should be marked as a display network.
type: bool
migration:
description:
                    - I(true) if the network should be marked as a migration network.
type: bool
gluster:
description:
                    - I(true) if the network should be marked as a gluster network.
type: bool
label:
description:
- "Name of the label to assign to the network."
version_added: "2.5"
extends_documentation_fragment: ovirt
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
# Create network
- ovirt_network:
data_center: mydatacenter
name: mynetwork
vlan_tag: 1
vm_network: true
# Remove network
- ovirt_network:
state: absent
name: mynetwork
# Change Network Name
- ovirt_network:
id: 00000000-0000-0000-0000-000000000000
name: "new_network_name"
data_center: mydatacenter
# Add network from external provider
- ovirt_network:
data_center: mydatacenter
name: mynetwork
external_provider: ovirt-provider-ovn
'''
RETURN = '''
id:
description: "ID of the managed network"
returned: "On success if network is found."
type: str
sample: 7de90f31-222c-436c-a1ca-7e655bd5b60c
network:
description: "Dictionary of all the network attributes. Network attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/network."
returned: "On success if network is found."
type: dict
'''
import traceback
try:
import ovirtsdk4.types as otypes
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
BaseModule,
check_sdk,
check_params,
create_connection,
equal,
ovirt_full_argument_spec,
search_by_name,
get_id_by_name,
get_dict_of_struct,
get_entity
)
class NetworksModule(BaseModule):
def build_entity(self):
        # Only resolve the external provider when one was specified; the
        # unconditional lookup crashed with "Entity '%s' was not found."
        # when external_provider was omitted (see the issue above).
        on_service = None
        if self.param('external_provider'):
            ons_service = self._connection.system_service().openstack_network_providers_service()
            on_service = ons_service.provider_service(get_id_by_name(ons_service, self.param('external_provider')))
return otypes.Network(
name=self._module.params['name'],
comment=self._module.params['comment'],
description=self._module.params['description'],
id=self._module.params['id'],
data_center=otypes.DataCenter(
name=self._module.params['data_center'],
) if self._module.params['data_center'] else None,
vlan=otypes.Vlan(
self._module.params['vlan_tag'],
) if self._module.params['vlan_tag'] else None,
usages=[
otypes.NetworkUsage.VM if self._module.params['vm_network'] else None
] if self._module.params['vm_network'] is not None else None,
mtu=self._module.params['mtu'],
external_provider=otypes.OpenStackNetworkProvider(id=on_service.get().id)
if self.param('external_provider') else None,
)
def post_create(self, entity):
self._update_label_assignments(entity)
def _update_label_assignments(self, entity):
if self.param('label') is None:
return
labels_service = self._service.service(entity.id).network_labels_service()
labels = [lbl.id for lbl in labels_service.list()]
if not self.param('label') in labels:
if not self._module.check_mode:
if labels:
labels_service.label_service(labels[0]).remove()
labels_service.add(
label=otypes.NetworkLabel(id=self.param('label'))
)
self.changed = True
def update_check(self, entity):
self._update_label_assignments(entity)
return (
equal(self._module.params.get('comment'), entity.comment) and
equal(self._module.params.get('name'), entity.name) and
equal(self._module.params.get('description'), entity.description) and
equal(self._module.params.get('vlan_tag'), getattr(entity.vlan, 'id', None)) and
equal(self._module.params.get('vm_network'), True if entity.usages else False) and
equal(self._module.params.get('mtu'), entity.mtu)
)
class ClusterNetworksModule(BaseModule):
def __init__(self, network_id, cluster_network, *args, **kwargs):
super(ClusterNetworksModule, self).__init__(*args, **kwargs)
self._network_id = network_id
self._cluster_network = cluster_network
self._old_usages = []
self._cluster_network_entity = get_entity(self._service.network_service(network_id))
if self._cluster_network_entity is not None:
self._old_usages = self._cluster_network_entity.usages
def build_entity(self):
return otypes.Network(
id=self._network_id,
name=self._module.params['name'],
required=self._cluster_network.get('required'),
display=self._cluster_network.get('display'),
usages=list(set([
otypes.NetworkUsage(usage)
for usage in ['display', 'gluster', 'migration']
if self._cluster_network.get(usage, False)
] + self._old_usages))
if (
self._cluster_network.get('display') is not None or
self._cluster_network.get('gluster') is not None or
self._cluster_network.get('migration') is not None
) else None,
)
def update_check(self, entity):
return (
equal(self._cluster_network.get('required'), entity.required) and
equal(self._cluster_network.get('display'), entity.display) and
all(
x in [
str(usage)
for usage in getattr(entity, 'usages', [])
# VM + MANAGEMENT is part of root network
if usage != otypes.NetworkUsage.VM and usage != otypes.NetworkUsage.MANAGEMENT
]
for x in [
usage
for usage in ['display', 'gluster', 'migration']
if self._cluster_network.get(usage, False)
]
)
)
def main():
argument_spec = ovirt_full_argument_spec(
state=dict(
choices=['present', 'absent'],
default='present',
),
data_center=dict(required=True),
id=dict(default=None),
name=dict(required=True),
description=dict(default=None),
comment=dict(default=None),
external_provider=dict(default=None),
vlan_tag=dict(default=None, type='int'),
vm_network=dict(default=None, type='bool'),
mtu=dict(default=None, type='int'),
clusters=dict(default=None, type='list'),
label=dict(default=None),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
check_sdk(module)
check_params(module)
try:
auth = module.params.pop('auth')
connection = create_connection(auth)
clusters_service = connection.system_service().clusters_service()
networks_service = connection.system_service().networks_service()
networks_module = NetworksModule(
connection=connection,
module=module,
service=networks_service,
)
state = module.params['state']
search_params = {
'name': module.params['name'],
'datacenter': module.params['data_center'],
}
if state == 'present':
imported = False
if module.params.get('external_provider') and module.params.get('name') not in [net.name for net in networks_service.list()]:
# Try to import network
ons_service = connection.system_service().openstack_network_providers_service()
on_service = ons_service.provider_service(get_id_by_name(ons_service, module.params.get('external_provider')))
on_networks_service = on_service.networks_service()
if module.params.get('name') in [net.name for net in on_networks_service.list()]:
network_service = on_networks_service.network_service(get_id_by_name(on_networks_service, module.params.get('name')))
network_service.import_(data_center=otypes.DataCenter(name=module.params.get('data_center')))
imported = True
ret = networks_module.create(search_params=search_params)
ret['changed'] = ret['changed'] or imported
# Update clusters networks:
if module.params.get('clusters') is not None:
for param_cluster in module.params.get('clusters'):
cluster = search_by_name(clusters_service, param_cluster.get('name'))
if cluster is None:
raise Exception("Cluster '%s' was not found." % param_cluster.get('name'))
cluster_networks_service = clusters_service.service(cluster.id).networks_service()
cluster_networks_module = ClusterNetworksModule(
network_id=ret['id'],
cluster_network=param_cluster,
connection=connection,
module=module,
service=cluster_networks_service,
)
if param_cluster.get('assigned', True):
ret = cluster_networks_module.create()
else:
ret = cluster_networks_module.remove()
elif state == 'absent':
ret = networks_module.remove(search_params=search_params)
module.exit_json(**ret)
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,633 |
[linux][2.8] include_vars fails when used as standalone command with dir
|
##### SUMMARY
when the following command is run
`ansible -i /tmp/inventory -m include_vars --args "dir=../defaults name=role_defaults" --playbook-dir=../ansible-role-forgerock-ds frds`
it errors with
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'dict' object has no attribute '_data_source'
frds-user-01 | FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
```
If I edit `/usr/lib/python2.7/site-packages/ansible/plugins/action/include_vars.py` and comment out line `100`
```
results = dict()
if self.source_dir:
self._set_dir_defaults()
#self._set_root_dir()
if not path.exists(self.source_dir):
```
and rerun, it returns the values contained within that directory.
I am using `molecule` to test the ansible role and have a workaround within `conftest.py` as follows
```
import pytest
@pytest.fixture(scope='module')
def ansible_vars(request, host):
defaults_files1 = "file=../../defaults/main/common.yml name=role_defaults"
defaults_files2 = "file=../../defaults/main/directory.yml name=role_defaults"
defaults_files3 = "file=../../defaults/main/proxy.yml name=role_defaults"
defaults_files4 = "file=../../defaults/main/replication.yml name=role_defaults"
vars_files = "file=../../vars/main.yml name=role_vars"
ansible_vars = host.ansible(
"include_vars",
defaults_files1)["ansible_facts"]["role_defaults"]
ansible_vars.update(host.ansible(
"include_vars",
defaults_files2)["ansible_facts"]["role_defaults"])
ansible_vars.update(host.ansible(
"include_vars",
defaults_files3)["ansible_facts"]["role_defaults"])
ansible_vars.update(host.ansible(
"include_vars",
defaults_files4)["ansible_facts"]["role_defaults"])
ansible_vars.update(host.ansible(
"include_vars",
vars_files)["ansible_facts"]["role_vars"])
return ansible_vars
```
but would prefer to use the correct command module to do so.
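The repeated `host.ansible("include_vars", ...)` calls in the quoted fixture can be consolidated into a loop over the defaults files. This is a hypothetical tidier version of that workaround, not the eventual fix; the file paths and the `host.ansible` call shape are taken from the quoted `conftest.py`, and only the merge helper is new:

```python
# Paths copied from the quoted fixture:
DEFAULTS_FILES = [
    "../../defaults/main/common.yml",
    "../../defaults/main/directory.yml",
    "../../defaults/main/proxy.yml",
    "../../defaults/main/replication.yml",
]

def merge_vars(dicts):
    """Merge dicts left to right; later files override earlier keys,
    matching the update() order used in the original fixture."""
    merged = {}
    for d in dicts:
        merged.update(d)
    return merged

# Inside the fixture, each entry of the list passed to merge_vars() would be:
#   host.ansible("include_vars",
#                "file={0} name=role_defaults".format(f))["ansible_facts"]["role_defaults"]
# for f in DEFAULTS_FILES, plus the vars/main.yml result.
```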
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`include_vars`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.4
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
`Return nothing`
##### OS / ENVIRONMENT
Linux, Centos 7.6
##### STEPS TO REPRODUCE
`ansible -i /tmp/inventory -m include_vars --args "dir=../defaults name=role_defaults" --playbook-dir=../ansible-role-forgerock-ds frds`
##### EXPECTED RESULTS
It returns all the values defined in all the files within the directory
##### ACTUAL RESULTS
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'dict' object has no attribute '_data_source'
frds-user-01 | FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
```
|
https://github.com/ansible/ansible/issues/62633
|
https://github.com/ansible/ansible/pull/66581
|
3f16752ed2da6348f290d32f72afcc04f6061927
|
cc2376b782a0c423566f5f42968c5003387150ae
| 2019-09-20T04:25:36Z |
python
| 2020-01-28T16:50:34Z |
changelogs/fragments/include_vars-ad-hoc-stack-trace-fix.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,633 |
[linux][2.8] include_vars fails when used as standalone command with dir
|
|
https://github.com/ansible/ansible/issues/62633
|
https://github.com/ansible/ansible/pull/66581
|
3f16752ed2da6348f290d32f72afcc04f6061927
|
cc2376b782a0c423566f5f42968c5003387150ae
| 2019-09-20T04:25:36Z |
python
| 2020-01-28T16:50:34Z |
lib/ansible/plugins/action/include_vars.py
|
# Copyright: (c) 2016, Allen Sanabria <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from os import path, walk
import re
from ansible.errors import AnsibleError
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_native, to_text
from ansible.plugins.action import ActionBase
class ActionModule(ActionBase):
TRANSFERS_FILES = False
VALID_FILE_EXTENSIONS = ['yaml', 'yml', 'json']
VALID_DIR_ARGUMENTS = ['dir', 'depth', 'files_matching', 'ignore_files', 'extensions', 'ignore_unknown_extensions']
VALID_FILE_ARGUMENTS = ['file', '_raw_params']
VALID_ALL = ['name']
def _set_dir_defaults(self):
if not self.depth:
self.depth = 0
if self.files_matching:
self.matcher = re.compile(r'{0}'.format(self.files_matching))
else:
self.matcher = None
if not self.ignore_files:
self.ignore_files = list()
if isinstance(self.ignore_files, string_types):
self.ignore_files = self.ignore_files.split()
elif isinstance(self.ignore_files, dict):
return {
'failed': True,
'message': '{0} must be a list'.format(self.ignore_files)
}
def _set_args(self):
""" Set instance variables based on the arguments that were passed """
self.return_results_as_name = self._task.args.get('name', None)
self.source_dir = self._task.args.get('dir', None)
self.source_file = self._task.args.get('file', None)
if not self.source_dir and not self.source_file:
self.source_file = self._task.args.get('_raw_params')
if self.source_file:
self.source_file = self.source_file.rstrip('\n')
self.depth = self._task.args.get('depth', None)
self.files_matching = self._task.args.get('files_matching', None)
self.ignore_unknown_extensions = self._task.args.get('ignore_unknown_extensions', False)
self.ignore_files = self._task.args.get('ignore_files', None)
self.valid_extensions = self._task.args.get('extensions', self.VALID_FILE_EXTENSIONS)
# convert/validate extensions list
if isinstance(self.valid_extensions, string_types):
self.valid_extensions = list(self.valid_extensions)
if not isinstance(self.valid_extensions, list):
raise AnsibleError('Invalid type for "extensions" option, it must be a list')
def run(self, tmp=None, task_vars=None):
""" Load yml files recursively from a directory.
"""
del tmp # tmp no longer has any effect
if task_vars is None:
task_vars = dict()
self.show_content = True
self.included_files = []
# Validate arguments
dirs = 0
files = 0
for arg in self._task.args:
if arg in self.VALID_DIR_ARGUMENTS:
dirs += 1
elif arg in self.VALID_FILE_ARGUMENTS:
files += 1
elif arg in self.VALID_ALL:
pass
else:
raise AnsibleError('{0} is not a valid option in include_vars'.format(to_native(arg)))
if dirs and files:
raise AnsibleError("You are mixing file only and dir only arguments, these are incompatible")
# set internal vars from args
self._set_args()
results = dict()
if self.source_dir:
self._set_dir_defaults()
self._set_root_dir()
if not path.exists(self.source_dir):
failed = True
err_msg = ('{0} directory does not exist'.format(to_native(self.source_dir)))
elif not path.isdir(self.source_dir):
failed = True
err_msg = ('{0} is not a directory'.format(to_native(self.source_dir)))
else:
for root_dir, filenames in self._traverse_dir_depth():
failed, err_msg, updated_results = (self._load_files_in_dir(root_dir, filenames))
if failed:
break
results.update(updated_results)
else:
try:
self.source_file = self._find_needle('vars', self.source_file)
failed, err_msg, updated_results = (
self._load_files(self.source_file)
)
if not failed:
results.update(updated_results)
except AnsibleError as e:
failed = True
err_msg = to_native(e)
if self.return_results_as_name:
scope = dict()
scope[self.return_results_as_name] = results
results = scope
result = super(ActionModule, self).run(task_vars=task_vars)
if failed:
result['failed'] = failed
result['message'] = err_msg
result['ansible_included_var_files'] = self.included_files
result['ansible_facts'] = results
result['_ansible_no_log'] = not self.show_content
return result
def _set_root_dir(self):
if self._task._role:
if self.source_dir.split('/')[0] == 'vars':
path_to_use = (
path.join(self._task._role._role_path, self.source_dir)
)
if path.exists(path_to_use):
self.source_dir = path_to_use
else:
path_to_use = (
path.join(
self._task._role._role_path, 'vars', self.source_dir
)
)
self.source_dir = path_to_use
        else:
            # Ad-hoc invocations hand the plugin a plain dict as _ds, which
            # has no _data_source attribute (see the traceback above), so
            # only derive the root dir when the attribute is present.
            if hasattr(self._task._ds, '_data_source'):
                current_dir = (
                    "/".join(self._task._ds._data_source.split('/')[:-1])
                )
                self.source_dir = path.join(current_dir, self.source_dir)
def _traverse_dir_depth(self):
""" Recursively iterate over a directory and sort the files in
        alphabetical order. Do not iterate past the set depth.
The default depth is unlimited.
"""
current_depth = 0
sorted_walk = list(walk(self.source_dir))
sorted_walk.sort(key=lambda x: x[0])
for current_root, current_dir, current_files in sorted_walk:
current_depth += 1
if current_depth <= self.depth or self.depth == 0:
current_files.sort()
yield (current_root, current_files)
else:
break
def _ignore_file(self, filename):
""" Return True if a file matches the list of ignore_files.
Args:
filename (str): The filename that is being matched against.
Returns:
Boolean
"""
for file_type in self.ignore_files:
try:
if re.search(r'{0}$'.format(file_type), filename):
return True
except Exception:
err_msg = 'Invalid regular expression: {0}'.format(file_type)
raise AnsibleError(err_msg)
return False
def _is_valid_file_ext(self, source_file):
""" Verify if source file has a valid extension
Args:
source_file (str): The full path of source file or source file.
Returns:
Bool
"""
file_ext = path.splitext(source_file)
return bool(len(file_ext) > 1 and file_ext[-1][1:] in self.valid_extensions)
def _load_files(self, filename, validate_extensions=False):
""" Loads a file and converts the output into a valid Python dict.
Args:
filename (str): The source file.
Returns:
Tuple (bool, str, dict)
"""
results = dict()
failed = False
err_msg = ''
if validate_extensions and not self._is_valid_file_ext(filename):
failed = True
err_msg = ('{0} does not have a valid extension: {1}'.format(to_native(filename), ', '.join(self.valid_extensions)))
else:
b_data, show_content = self._loader._get_file_contents(filename)
data = to_text(b_data, errors='surrogate_or_strict')
self.show_content = show_content
data = self._loader.load(data, file_name=filename, show_content=show_content)
if not data:
data = dict()
if not isinstance(data, dict):
failed = True
err_msg = ('{0} must be stored as a dictionary/hash'.format(to_native(filename)))
else:
self.included_files.append(filename)
results.update(data)
return failed, err_msg, results
def _load_files_in_dir(self, root_dir, var_files):
""" Load the found yml files and update/overwrite the dictionary.
Args:
root_dir (str): The base directory of the list of files that is being passed.
var_files: (list): List of files to iterate over and load into a dictionary.
Returns:
Tuple (bool, str, dict)
"""
results = dict()
failed = False
err_msg = ''
for filename in var_files:
stop_iter = False
# Never include main.yml from a role, as that is the default included by the role
if self._task._role:
if path.join(self._task._role._role_path, filename) == path.join(root_dir, 'vars', 'main.yml'):
stop_iter = True
continue
filepath = path.join(root_dir, filename)
if self.files_matching:
if not self.matcher.search(filename):
stop_iter = True
if not stop_iter and not failed:
if self.ignore_unknown_extensions:
if path.exists(filepath) and not self._ignore_file(filename) and self._is_valid_file_ext(filename):
failed, err_msg, loaded_data = self._load_files(filepath, validate_extensions=True)
if not failed:
results.update(loaded_data)
else:
if path.exists(filepath) and not self._ignore_file(filename):
failed, err_msg, loaded_data = self._load_files(filepath, validate_extensions=True)
if not failed:
results.update(loaded_data)
return failed, err_msg, results
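The `AttributeError: 'dict' object has no attribute '_data_source'` reported above comes from `_set_root_dir`: for tasks loaded from a playbook, `self._task._ds` is a YAML-loaded object carrying a `_data_source` attribute (the file the task came from), but for ad-hoc invocations it is a plain dict. A minimal reproduction (the dict literal is illustrative, not the real task data structure):

```python
# Ad-hoc tasks hand the action plugin a plain dict as _ds; plain dicts have
# no _data_source attribute, hence the AttributeError in the report.
task_ds = {"include_vars": {"dir": "../defaults", "name": "role_defaults"}}

try:
    task_ds._data_source
except AttributeError as exc:
    message = str(exc)

# A hasattr() guard (one possible shape of a fix) avoids the crash and
# simply skips the playbook-relative root-dir derivation:
source_hint = task_ds._data_source if hasattr(task_ds, "_data_source") else None
```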
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,633 |
[linux][2.8] include_vars fails when used as standalone command with dir
|
|
https://github.com/ansible/ansible/issues/62633
|
https://github.com/ansible/ansible/pull/66581
|
3f16752ed2da6348f290d32f72afcc04f6061927
|
cc2376b782a0c423566f5f42968c5003387150ae
| 2019-09-20T04:25:36Z |
python
| 2020-01-28T16:50:34Z |
test/integration/targets/include_vars-ad-hoc/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,633 |
[linux][2.8] include_vars fails when used as standalone command with dir
|
##### SUMMARY
when the following command is run
`ansible -i /tmp/inventory -m include_vars --args "dir=../defaults name=role_defaults" --playbook-dir=../ansible-role-forgerock-ds frds`
it errors with
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'dict' object has no attribute '_data_source'
frds-user-01 | FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
```
if i edit `/usr/lib/python2.7/site-packages/ansible/plugins/action/include_vars.py` and comment out line `100`
```
results = dict()
if self.source_dir:
self._set_dir_defaults()
#self._set_root_dir()
if not path.exists(self.source_dir):
```
and rerun it returns the values contained within that directory.
I am using `molecule` to test the ansible role and have a workaround within `conftest.py` as follows
```
import pytest
@pytest.fixture(scope='module')
def ansible_vars(request, host):
defaults_files1 = "file=../../defaults/main/common.yml name=role_defaults"
defaults_files2 = "file=../../defaults/main/directory.yml name=role_defaults"
defaults_files3 = "file=../../defaults/main/proxy.yml name=role_defaults"
defaults_files4 = "file=../../defaults/main/replication.yml name=role_defaults"
vars_files = "file=../../vars/main.yml name=role_vars"
ansible_vars = host.ansible(
"include_vars",
defaults_files1)["ansible_facts"]["role_defaults"]
ansible_vars.update(host.ansible(
"include_vars",
defaults_files2)["ansible_facts"]["role_defaults"])
ansible_vars.update(host.ansible(
"include_vars",
defaults_files3)["ansible_facts"]["role_defaults"])
ansible_vars.update(host.ansible(
"include_vars",
defaults_files4)["ansible_facts"]["role_defaults"])
ansible_vars.update(host.ansible(
"include_vars",
vars_files)["ansible_facts"]["role_vars"])
return ansible_vars
```
but would prefer to use the correct command module to do so.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`include_vars`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.4
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
`ansible-config dump --only-changed` returns nothing
##### OS / ENVIRONMENT
Linux, Centos 7.6
##### STEPS TO REPRODUCE
`ansible -i /tmp/inventory -m include_vars --args "dir=../defaults name=role_defaults" --playbook-dir=../ansible-role-forgerock-ds frds`
##### EXPECTED RESULTS
It returns all the values defined in all the files within the directory
##### ACTUAL RESULTS
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'dict' object has no attribute '_data_source'
frds-user-01 | FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
```
|
https://github.com/ansible/ansible/issues/62633
|
https://github.com/ansible/ansible/pull/66581
|
3f16752ed2da6348f290d32f72afcc04f6061927
|
cc2376b782a0c423566f5f42968c5003387150ae
| 2019-09-20T04:25:36Z |
python
| 2020-01-28T16:50:34Z |
test/integration/targets/include_vars-ad-hoc/dir/inc.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,633 |
[linux][2.8] include_vars fails when used as standalone command with dir
|
##### SUMMARY
when the following command is run
`ansible -i /tmp/inventory -m include_vars --args "dir=../defaults name=role_defaults" --playbook-dir=../ansible-role-forgerock-ds frds`
it errors with
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'dict' object has no attribute '_data_source'
frds-user-01 | FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
```
If I edit `/usr/lib/python2.7/site-packages/ansible/plugins/action/include_vars.py` and comment out line `100`
```
results = dict()
if self.source_dir:
self._set_dir_defaults()
#self._set_root_dir()
if not path.exists(self.source_dir):
```
and then rerun the command, it returns the values contained within that directory.
I am using `molecule` to test the ansible role and have a workaround within `conftest.py` as follows
```
import pytest
@pytest.fixture(scope='module')
def ansible_vars(request, host):
defaults_files1 = "file=../../defaults/main/common.yml name=role_defaults"
defaults_files2 = "file=../../defaults/main/directory.yml name=role_defaults"
defaults_files3 = "file=../../defaults/main/proxy.yml name=role_defaults"
defaults_files4 = "file=../../defaults/main/replication.yml name=role_defaults"
vars_files = "file=../../vars/main.yml name=role_vars"
ansible_vars = host.ansible(
"include_vars",
defaults_files1)["ansible_facts"]["role_defaults"]
ansible_vars.update(host.ansible(
"include_vars",
defaults_files2)["ansible_facts"]["role_defaults"])
ansible_vars.update(host.ansible(
"include_vars",
defaults_files3)["ansible_facts"]["role_defaults"])
ansible_vars.update(host.ansible(
"include_vars",
defaults_files4)["ansible_facts"]["role_defaults"])
ansible_vars.update(host.ansible(
"include_vars",
vars_files)["ansible_facts"]["role_vars"])
return ansible_vars
```
but I would prefer to be able to use the `include_vars` module directly, as in the ad-hoc command above.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`include_vars`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.4
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
`ansible-config dump --only-changed` returns nothing
##### OS / ENVIRONMENT
Linux, Centos 7.6
##### STEPS TO REPRODUCE
`ansible -i /tmp/inventory -m include_vars --args "dir=../defaults name=role_defaults" --playbook-dir=../ansible-role-forgerock-ds frds`
##### EXPECTED RESULTS
It returns all the values defined in all the files within the directory
##### ACTUAL RESULTS
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'dict' object has no attribute '_data_source'
frds-user-01 | FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
```
|
https://github.com/ansible/ansible/issues/62633
|
https://github.com/ansible/ansible/pull/66581
|
3f16752ed2da6348f290d32f72afcc04f6061927
|
cc2376b782a0c423566f5f42968c5003387150ae
| 2019-09-20T04:25:36Z |
python
| 2020-01-28T16:50:34Z |
test/integration/targets/include_vars-ad-hoc/runme.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,830 |
ipify_facts integration tests fail intermittently
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The `ipify_facts` integration tests fail intermittently. We may be hitting API limits, but it's hard to tell based on the error message.
https://app.shippable.com/github/ansible/ansible/runs/157231/73/tests
```
{
"changed": false,
"msg": "No valid or no response from url https://api.ipify.org/ within 30 seconds (timeout)"
}
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`test/integration/targets/ipify_facts/`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Shippable
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Shippable
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```shell
ansible-test integration --docker centos8 ipify_facts
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tests pass
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Tests fail
<!--- Paste verbatim command output between quotes -->
```paste below
{
"changed": false,
"msg": "No valid or no response from url https://api.ipify.org/ within 30 seconds (timeout)"
}
```
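Until the test stops depending on the live endpoint, a bounded retry would mask transient timeouts — in the task itself that would be `retries`/`until`/`delay` on the `ipify_facts` call. The same idea in plain Python, with both the helper and the flaky lookup being illustrative:

```python
import time

def retry(func, attempts=3, delay=0.0, exceptions=(Exception,)):
    """Call func(), retrying up to `attempts` times on the given exceptions."""
    last_err = None
    for _ in range(attempts):
        try:
            return func()
        except exceptions as err:
            last_err = err
            time.sleep(delay)
    raise last_err

calls = {"n": 0}

def flaky_lookup():
    # Fails twice, then answers -- mimicking an intermittent api.ipify.org.
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("no response from url within 30 seconds")
    return "203.0.113.7"

print(retry(flaky_lookup))  # 203.0.113.7
```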
|
https://github.com/ansible/ansible/issues/66830
|
https://github.com/ansible/ansible/pull/66897
|
8ba324a33db302ed705fd1935955dfd718462777
|
91063f40d6470b418616bee638030988ae08bdc9
| 2020-01-27T22:15:26Z |
python
| 2020-01-29T15:43:40Z |
test/integration/targets/ipify_facts/tasks/main.yml
|
# Test code for the ipify_facts
# (c) 2017, Abhijeet Kasurde <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
- debug: var=ansible_distribution
- debug: var=ansible_distribution_version
- set_fact:
validate_certs: false
when: (ansible_distribution == "MacOSX" and ansible_distribution_version == "10.11.1")
- name: get information about current IP using ipify facts
ipify_facts:
timeout: 30
validate_certs: "{{ validate_certs }}"
register: external_ip
- debug: var="{{ external_ip }}"
- name: check if task was successful
assert:
that:
- "{{ external_ip.changed == false }}"
- "{{ external_ip['ansible_facts'] is defined }}"
- "{{ external_ip['ansible_facts']['ipify_public_ip'] is defined }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,174 |
zabbix_host: support usermacros
|
##### SUMMARY
[`zabbix_host`](https://docs.ansible.com/ansible/latest/modules/zabbix_host_module.html) module currently does not support specifying usermacros.
While usermacros can be added separately with [`zabbix_hostmacro`](https://docs.ansible.com/ansible/latest/modules/zabbix_hostmacro_module.html) module, that splits up the configuration and makes it very hard to maintain.
It would be great to specify macros in the host definition of `zabbix_host`.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
zabbix_host
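Merging macros into the host parameters is roughly what a combined module would have to do. A sketch of just the parameter-building step — the function name and macro normalization are assumptions, not the module's actual API, though the Zabbix API itself does accept a `macros` list of `{'macro': ..., 'value': ...}` objects on host.create/host.update:

```python
def build_host_params(host_name, group_ids, macros=None):
    """Assemble a host.create/host.update parameter dict including usermacros."""
    params = {"host": host_name, "groups": group_ids}
    if macros:
        # Zabbix expects {$UPPERCASE} macro names.
        params["macros"] = [
            {"macro": "{$%s}" % name.upper(), "value": value}
            for name, value in sorted(macros.items())
        ]
    return params

params = build_host_params("ExampleHost", [{"groupid": "2"}],
                           macros={"snmp_community": "public"})
print(params["macros"])  # [{'macro': '{$SNMP_COMMUNITY}', 'value': 'public'}]
```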
|
https://github.com/ansible/ansible/issues/66174
|
https://github.com/ansible/ansible/pull/66777
|
5fdc9a61f0ea41cb22c28e7e63a30603579db88c
|
e3190adcbb2ff41ddf627eb98becbb7bc5838e62
| 2020-01-03T12:49:45Z |
python
| 2020-01-30T13:32:47Z |
changelogs/fragments/66777-zabbix_host_tags_macros_support.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,174 |
zabbix_host: support usermacros
|
##### SUMMARY
[`zabbix_host`](https://docs.ansible.com/ansible/latest/modules/zabbix_host_module.html) module currently does not support specifying usermacros.
While usermacros can be added separately with [`zabbix_hostmacro`](https://docs.ansible.com/ansible/latest/modules/zabbix_hostmacro_module.html) module, that splits up the configuration and makes it very hard to maintain.
It would be great to specify macros in the host definition of `zabbix_host`.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
zabbix_host
|
https://github.com/ansible/ansible/issues/66174
|
https://github.com/ansible/ansible/pull/66777
|
5fdc9a61f0ea41cb22c28e7e63a30603579db88c
|
e3190adcbb2ff41ddf627eb98becbb7bc5838e62
| 2020-01-03T12:49:45Z |
python
| 2020-01-30T13:32:47Z |
lib/ansible/modules/monitoring/zabbix/zabbix_host.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2013-2014, Epic Games, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: zabbix_host
short_description: Create/update/delete Zabbix hosts
description:
- This module allows you to create, modify and delete Zabbix host entries and associated group and template data.
version_added: "2.0"
author:
- "Cove (@cove)"
- Tony Minfei Ding (!UNKNOWN)
- Harrison Gu (@harrisongu)
- Werner Dijkerman (@dj-wasabi)
- Eike Frost (@eikef)
requirements:
- "python >= 2.6"
- "zabbix-api >= 0.5.4"
options:
host_name:
description:
- Name of the host in Zabbix.
- I(host_name) is the unique identifier used and cannot be updated using this module.
required: true
type: str
visible_name:
description:
- Visible name of the host in Zabbix.
version_added: '2.3'
type: str
description:
description:
- Description of the host in Zabbix.
version_added: '2.5'
type: str
host_groups:
description:
- List of host groups the host is part of.
type: list
elements: str
link_templates:
description:
- List of templates linked to the host.
type: list
elements: str
inventory_mode:
description:
- Configure the inventory mode.
choices: ['automatic', 'manual', 'disabled']
version_added: '2.1'
type: str
inventory_zabbix:
description:
- Add Facts for a zabbix inventory (e.g. Tag) (see example below).
- Please review the interface documentation for more information on the supported properties
- U(https://www.zabbix.com/documentation/3.2/manual/api/reference/host/object#host_inventory)
version_added: '2.5'
type: dict
status:
description:
- Monitoring status of the host.
choices: ['enabled', 'disabled']
default: 'enabled'
type: str
state:
description:
- State of the host.
- On C(present), it will create if host does not exist or update the host if the associated data is different.
- On C(absent) will remove a host if it exists.
choices: ['present', 'absent']
default: 'present'
type: str
proxy:
description:
- The name of the Zabbix proxy to be used.
type: str
interfaces:
type: list
elements: dict
description:
- List of interfaces to be created for the host (see example below).
- For more information, review host interface documentation at
- U(https://www.zabbix.com/documentation/4.0/manual/api/reference/hostinterface/object)
suboptions:
type:
description:
- Interface type to add
- Numerical values are also accepted for interface type
- 1 = agent
- 2 = snmp
- 3 = ipmi
- 4 = jmx
choices: ['agent', 'snmp', 'ipmi', 'jmx']
required: true
main:
type: int
description:
- Whether the interface is used as default.
- If multiple interfaces with the same type are provided, only one can be default.
- 0 (not default), 1 (default)
default: 0
choices: [0, 1]
useip:
type: int
description:
- Connect to host interface with IP address instead of DNS name.
- 0 (don't use ip), 1 (use ip)
default: 0
choices: [0, 1]
ip:
type: str
description:
- IP address used by host interface.
- Required if I(useip=1).
default: ''
dns:
type: str
description:
- DNS name of the host interface.
- Required if I(useip=0).
default: ''
port:
type: str
description:
- Port used by host interface.
- If not specified, default port for each type of interface is used
- 10050 if I(type='agent')
- 161 if I(type='snmp')
- 623 if I(type='ipmi')
- 12345 if I(type='jmx')
bulk:
type: int
description:
- Whether to use bulk SNMP requests.
- 0 (don't use bulk requests), 1 (use bulk requests)
choices: [0, 1]
default: 1
default: []
tls_connect:
description:
- Specifies what encryption to use for outgoing connections.
- Possible values, 1 (no encryption), 2 (PSK), 4 (certificate).
- Works only with >= Zabbix 3.0
default: 1
version_added: '2.5'
type: int
tls_accept:
description:
- Specifies what types of connections are allowed for incoming connections.
- The tls_accept parameter accepts values of 1 to 7
- Possible values, 1 (no encryption), 2 (PSK), 4 (certificate).
- Values can be combined.
- Works only with >= Zabbix 3.0
default: 1
version_added: '2.5'
type: int
tls_psk_identity:
description:
- It is a unique name by which this specific PSK is referred to by Zabbix components
- Do not put sensitive information in the PSK identity string, it is transmitted over the network unencrypted.
- Works only with >= Zabbix 3.0
version_added: '2.5'
type: str
tls_psk:
description:
- PSK value is a hard to guess string of hexadecimal digits.
- The preshared key, at least 32 hex digits. Required if either I(tls_connect) or I(tls_accept) has PSK enabled.
- Works only with >= Zabbix 3.0
version_added: '2.5'
type: str
ca_cert:
description:
- Required certificate issuer.
- Works only with >= Zabbix 3.0
version_added: '2.5'
aliases: [ tls_issuer ]
type: str
tls_subject:
description:
- Required certificate subject.
- Works only with >= Zabbix 3.0
version_added: '2.5'
type: str
ipmi_authtype:
description:
- IPMI authentication algorithm.
- Please review the Host object documentation for more information on the supported properties
- 'https://www.zabbix.com/documentation/3.4/manual/api/reference/host/object'
- Possible values are, C(0) (none), C(1) (MD2), C(2) (MD5), C(4) (straight), C(5) (OEM), C(6) (RMCP+),
with -1 being the API default.
- Please note that the Zabbix API will treat absent settings as default when updating
any of the I(ipmi_)-options; this means that if you attempt to set any of the four
options individually, the rest will be reset to default values.
version_added: '2.5'
type: int
ipmi_privilege:
description:
- IPMI privilege level.
- Please review the Host object documentation for more information on the supported properties
- 'https://www.zabbix.com/documentation/3.4/manual/api/reference/host/object'
- Possible values are C(1) (callback), C(2) (user), C(3) (operator), C(4) (admin), C(5) (OEM), with C(2)
being the API default.
- also see the last note in the I(ipmi_authtype) documentation
version_added: '2.5'
type: int
ipmi_username:
description:
- IPMI username.
- also see the last note in the I(ipmi_authtype) documentation
version_added: '2.5'
type: str
ipmi_password:
description:
- IPMI password.
- also see the last note in the I(ipmi_authtype) documentation
version_added: '2.5'
type: str
force:
description:
- Overwrite the host configuration, even if already present.
type: bool
default: 'yes'
version_added: '2.0'
extends_documentation_fragment:
- zabbix
'''
EXAMPLES = r'''
- name: Create a new host or update an existing host's info
local_action:
module: zabbix_host
server_url: http://monitor.example.com
login_user: username
login_password: password
host_name: ExampleHost
visible_name: ExampleName
description: My ExampleHost Description
host_groups:
- Example group1
- Example group2
link_templates:
- Example template1
- Example template2
status: enabled
state: present
inventory_mode: manual
inventory_zabbix:
tag: "{{ your_tag }}"
alias: "{{ your_alias }}"
notes: "Special Informations: {{ your_informations | default('None') }}"
location: "{{ your_location }}"
site_rack: "{{ your_site_rack }}"
os: "{{ your_os }}"
hardware: "{{ your_hardware }}"
ipmi_authtype: 2
ipmi_privilege: 4
ipmi_username: username
ipmi_password: password
interfaces:
- type: 1
main: 1
useip: 1
ip: 10.xx.xx.xx
dns: ""
port: "10050"
- type: 4
main: 1
useip: 1
ip: 10.xx.xx.xx
dns: ""
port: "12345"
proxy: a.zabbix.proxy
- name: Update an existing host's TLS settings
local_action:
module: zabbix_host
server_url: http://monitor.example.com
login_user: username
login_password: password
host_name: ExampleHost
visible_name: ExampleName
host_groups:
- Example group1
tls_psk_identity: test
tls_connect: 2
tls_psk: 123456789abcdef123456789abcdef12
'''
import atexit
import copy
import traceback
try:
from zabbix_api import ZabbixAPI
HAS_ZABBIX_API = True
except ImportError:
ZBX_IMP_ERR = traceback.format_exc()
HAS_ZABBIX_API = False
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class Host(object):
def __init__(self, module, zbx):
self._module = module
self._zapi = zbx
self._zbx_api_version = zbx.api_version()[:5]
    # check if a host with the given name exists
def is_host_exist(self, host_name):
result = self._zapi.host.get({'filter': {'host': host_name}})
return result
# check if host group exists
def check_host_group_exist(self, group_names):
for group_name in group_names:
result = self._zapi.hostgroup.get({'filter': {'name': group_name}})
if not result:
self._module.fail_json(msg="Hostgroup not found: %s" % group_name)
return True
def get_template_ids(self, template_list):
template_ids = []
if template_list is None or len(template_list) == 0:
return template_ids
for template in template_list:
template_list = self._zapi.template.get({'output': 'extend', 'filter': {'host': template}})
if len(template_list) < 1:
self._module.fail_json(msg="Template not found: %s" % template)
else:
template_id = template_list[0]['templateid']
template_ids.append(template_id)
return template_ids
def add_host(self, host_name, group_ids, status, interfaces, proxy_id, visible_name, description, tls_connect,
tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
parameters = {'host': host_name, 'interfaces': interfaces, 'groups': group_ids, 'status': status,
'tls_connect': tls_connect, 'tls_accept': tls_accept}
if proxy_id:
parameters['proxy_hostid'] = proxy_id
if visible_name:
parameters['name'] = visible_name
if tls_psk_identity is not None:
parameters['tls_psk_identity'] = tls_psk_identity
if tls_psk is not None:
parameters['tls_psk'] = tls_psk
if tls_issuer is not None:
parameters['tls_issuer'] = tls_issuer
if tls_subject is not None:
parameters['tls_subject'] = tls_subject
if description:
parameters['description'] = description
if ipmi_authtype is not None:
parameters['ipmi_authtype'] = ipmi_authtype
if ipmi_privilege is not None:
parameters['ipmi_privilege'] = ipmi_privilege
if ipmi_username is not None:
parameters['ipmi_username'] = ipmi_username
if ipmi_password is not None:
parameters['ipmi_password'] = ipmi_password
host_list = self._zapi.host.create(parameters)
if len(host_list) >= 1:
return host_list['hostids'][0]
except Exception as e:
self._module.fail_json(msg="Failed to create host %s: %s" % (host_name, e))
def update_host(self, host_name, group_ids, status, host_id, interfaces, exist_interface_list, proxy_id,
visible_name, description, tls_connect, tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype,
ipmi_privilege, ipmi_username, ipmi_password):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
parameters = {'hostid': host_id, 'groups': group_ids, 'status': status, 'tls_connect': tls_connect,
'tls_accept': tls_accept}
if proxy_id >= 0:
parameters['proxy_hostid'] = proxy_id
if visible_name:
parameters['name'] = visible_name
if tls_psk_identity:
parameters['tls_psk_identity'] = tls_psk_identity
if tls_psk:
parameters['tls_psk'] = tls_psk
if tls_issuer:
parameters['tls_issuer'] = tls_issuer
if tls_subject:
parameters['tls_subject'] = tls_subject
if description:
parameters['description'] = description
if ipmi_authtype:
parameters['ipmi_authtype'] = ipmi_authtype
if ipmi_privilege:
parameters['ipmi_privilege'] = ipmi_privilege
if ipmi_username:
parameters['ipmi_username'] = ipmi_username
if ipmi_password:
parameters['ipmi_password'] = ipmi_password
self._zapi.host.update(parameters)
interface_list_copy = exist_interface_list
if interfaces:
for interface in interfaces:
flag = False
interface_str = interface
for exist_interface in exist_interface_list:
interface_type = int(interface['type'])
exist_interface_type = int(exist_interface['type'])
if interface_type == exist_interface_type:
# update
interface_str['interfaceid'] = exist_interface['interfaceid']
self._zapi.hostinterface.update(interface_str)
flag = True
interface_list_copy.remove(exist_interface)
break
if not flag:
# add
interface_str['hostid'] = host_id
self._zapi.hostinterface.create(interface_str)
# remove
remove_interface_ids = []
for remove_interface in interface_list_copy:
interface_id = remove_interface['interfaceid']
remove_interface_ids.append(interface_id)
if len(remove_interface_ids) > 0:
self._zapi.hostinterface.delete(remove_interface_ids)
except Exception as e:
self._module.fail_json(msg="Failed to update host %s: %s" % (host_name, e))
def delete_host(self, host_id, host_name):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.delete([host_id])
except Exception as e:
self._module.fail_json(msg="Failed to delete host %s: %s" % (host_name, e))
# get host by host name
def get_host_by_host_name(self, host_name):
host_list = self._zapi.host.get({'output': 'extend', 'selectInventory': 'extend', 'filter': {'host': [host_name]}})
if len(host_list) < 1:
self._module.fail_json(msg="Host not found: %s" % host_name)
else:
return host_list[0]
# get proxyid by proxy name
def get_proxyid_by_proxy_name(self, proxy_name):
proxy_list = self._zapi.proxy.get({'output': 'extend', 'filter': {'host': [proxy_name]}})
if len(proxy_list) < 1:
self._module.fail_json(msg="Proxy not found: %s" % proxy_name)
else:
return int(proxy_list[0]['proxyid'])
# get group ids by group names
def get_group_ids_by_group_names(self, group_names):
if self.check_host_group_exist(group_names):
return self._zapi.hostgroup.get({'output': 'groupid', 'filter': {'name': group_names}})
# get host groups ids by host id
def get_group_ids_by_host_id(self, host_id):
return self._zapi.hostgroup.get({'output': 'groupid', 'hostids': host_id})
# get host templates by host id
def get_host_templates_by_host_id(self, host_id):
template_ids = []
template_list = self._zapi.template.get({'output': 'extend', 'hostids': host_id})
for template in template_list:
template_ids.append(template['templateid'])
return template_ids
    # check whether the existing interfaces match the requested interfaces
def check_interface_properties(self, exist_interface_list, interfaces):
interfaces_port_list = []
if interfaces is not None:
if len(interfaces) >= 1:
for interface in interfaces:
interfaces_port_list.append(str(interface['port']))
exist_interface_ports = []
if len(exist_interface_list) >= 1:
for exist_interface in exist_interface_list:
exist_interface_ports.append(str(exist_interface['port']))
if set(interfaces_port_list) != set(exist_interface_ports):
return True
for exist_interface in exist_interface_list:
exit_interface_port = str(exist_interface['port'])
for interface in interfaces:
interface_port = str(interface['port'])
if interface_port == exit_interface_port:
for key in interface.keys():
if str(exist_interface[key]) != str(interface[key]):
return True
return False
    # get the monitoring status from a host object
def get_host_status_by_host(self, host):
return host['status']
# check all the properties before link or clear template
def check_all_properties(self, host_id, group_ids, status, interfaces, template_ids,
exist_interfaces, host, proxy_id, visible_name, description, host_name,
inventory_mode, inventory_zabbix, tls_accept, tls_psk_identity, tls_psk,
tls_issuer, tls_subject, tls_connect, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password):
# get the existing host's groups
exist_host_groups = sorted(self.get_group_ids_by_host_id(host_id), key=lambda k: k['groupid'])
if sorted(group_ids, key=lambda k: k['groupid']) != exist_host_groups:
return True
# get the existing status
exist_status = self.get_host_status_by_host(host)
if int(status) != int(exist_status):
return True
# check the exist_interfaces whether it equals the interfaces or not
if self.check_interface_properties(exist_interfaces, interfaces):
return True
# get the existing templates
exist_template_ids = self.get_host_templates_by_host_id(host_id)
if set(list(template_ids)) != set(exist_template_ids):
return True
if int(host['proxy_hostid']) != int(proxy_id):
return True
# Check whether the visible_name has changed; Zabbix defaults to the technical hostname if not set.
if visible_name:
if host['name'] != visible_name:
return True
# Only compare description if it is given as a module parameter
if description:
if host['description'] != description:
return True
if inventory_mode:
if LooseVersion(self._zbx_api_version) <= LooseVersion('4.4.0'):
if host['inventory']:
if int(host['inventory']['inventory_mode']) != self.inventory_mode_numeric(inventory_mode):
return True
elif inventory_mode != 'disabled':
return True
else:
if int(host['inventory_mode']) != self.inventory_mode_numeric(inventory_mode):
return True
if inventory_zabbix:
proposed_inventory = copy.deepcopy(host['inventory'])
proposed_inventory.update(inventory_zabbix)
if proposed_inventory != host['inventory']:
return True
if tls_accept is not None and 'tls_accept' in host:
if int(host['tls_accept']) != tls_accept:
return True
if tls_psk_identity is not None and 'tls_psk_identity' in host:
if host['tls_psk_identity'] != tls_psk_identity:
return True
if tls_psk is not None and 'tls_psk' in host:
if host['tls_psk'] != tls_psk:
return True
if tls_issuer is not None and 'tls_issuer' in host:
if host['tls_issuer'] != tls_issuer:
return True
if tls_subject is not None and 'tls_subject' in host:
if host['tls_subject'] != tls_subject:
return True
if tls_connect is not None and 'tls_connect' in host:
if int(host['tls_connect']) != tls_connect:
return True
if ipmi_authtype is not None:
if int(host['ipmi_authtype']) != ipmi_authtype:
return True
if ipmi_privilege is not None:
if int(host['ipmi_privilege']) != ipmi_privilege:
return True
if ipmi_username is not None:
if host['ipmi_username'] != ipmi_username:
return True
if ipmi_password is not None:
if host['ipmi_password'] != ipmi_password:
return True
return False
# link or clear template of the host
def link_or_clear_template(self, host_id, template_id_list, tls_connect, tls_accept, tls_psk_identity, tls_psk,
tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password):
# get host's exist template ids
exist_template_id_list = self.get_host_templates_by_host_id(host_id)
exist_template_ids = set(exist_template_id_list)
template_ids = set(template_id_list)
template_id_list = list(template_ids)
# get unlink and clear templates
templates_clear = exist_template_ids.difference(template_ids)
templates_clear_list = list(templates_clear)
request_str = {'hostid': host_id, 'templates': template_id_list, 'templates_clear': templates_clear_list,
'tls_connect': tls_connect, 'tls_accept': tls_accept, 'ipmi_authtype': ipmi_authtype,
'ipmi_privilege': ipmi_privilege, 'ipmi_username': ipmi_username, 'ipmi_password': ipmi_password}
if tls_psk_identity is not None:
request_str['tls_psk_identity'] = tls_psk_identity
if tls_psk is not None:
request_str['tls_psk'] = tls_psk
if tls_issuer is not None:
request_str['tls_issuer'] = tls_issuer
if tls_subject is not None:
request_str['tls_subject'] = tls_subject
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update(request_str)
except Exception as e:
self._module.fail_json(msg="Failed to link template to host: %s" % e)
def inventory_mode_numeric(self, inventory_mode):
if inventory_mode == "automatic":
return int(1)
elif inventory_mode == "manual":
return int(0)
elif inventory_mode == "disabled":
return int(-1)
return inventory_mode
# Update the host inventory_mode
def update_inventory_mode(self, host_id, inventory_mode):
# nothing was set, do nothing
if not inventory_mode:
return
inventory_mode = self.inventory_mode_numeric(inventory_mode)
# watch for - https://support.zabbix.com/browse/ZBX-6033
request_str = {'hostid': host_id, 'inventory_mode': inventory_mode}
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update(request_str)
except Exception as e:
self._module.fail_json(msg="Failed to set inventory_mode to host: %s" % e)
def update_inventory_zabbix(self, host_id, inventory):
if not inventory:
return
request_str = {'hostid': host_id, 'inventory': inventory}
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update(request_str)
except Exception as e:
self._module.fail_json(msg="Failed to set inventory to host: %s" % e)
def main():
module = AnsibleModule(
argument_spec=dict(
server_url=dict(type='str', required=True, aliases=['url']),
login_user=dict(type='str', required=True),
login_password=dict(type='str', required=True, no_log=True),
host_name=dict(type='str', required=True),
http_login_user=dict(type='str', required=False, default=None),
http_login_password=dict(type='str', required=False, default=None, no_log=True),
validate_certs=dict(type='bool', required=False, default=True),
host_groups=dict(type='list', required=False),
link_templates=dict(type='list', required=False),
status=dict(type='str', default="enabled", choices=['enabled', 'disabled']),
state=dict(type='str', default="present", choices=['present', 'absent']),
inventory_mode=dict(type='str', required=False, choices=['automatic', 'manual', 'disabled']),
ipmi_authtype=dict(type='int', default=None),
ipmi_privilege=dict(type='int', default=None),
ipmi_username=dict(type='str', required=False, default=None),
ipmi_password=dict(type='str', required=False, default=None, no_log=True),
tls_connect=dict(type='int', default=1),
tls_accept=dict(type='int', default=1),
tls_psk_identity=dict(type='str', required=False),
tls_psk=dict(type='str', required=False),
ca_cert=dict(type='str', required=False, aliases=['tls_issuer']),
tls_subject=dict(type='str', required=False),
inventory_zabbix=dict(type='dict', required=False),
timeout=dict(type='int', default=10),
interfaces=dict(type='list', required=False),
force=dict(type='bool', default=True),
proxy=dict(type='str', required=False),
visible_name=dict(type='str', required=False),
description=dict(type='str', required=False)
),
supports_check_mode=True
)
if not HAS_ZABBIX_API:
module.fail_json(msg=missing_required_lib('zabbix-api', url='https://pypi.org/project/zabbix-api/'), exception=ZBX_IMP_ERR)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
http_login_user = module.params['http_login_user']
http_login_password = module.params['http_login_password']
validate_certs = module.params['validate_certs']
host_name = module.params['host_name']
visible_name = module.params['visible_name']
description = module.params['description']
host_groups = module.params['host_groups']
link_templates = module.params['link_templates']
inventory_mode = module.params['inventory_mode']
ipmi_authtype = module.params['ipmi_authtype']
ipmi_privilege = module.params['ipmi_privilege']
ipmi_username = module.params['ipmi_username']
ipmi_password = module.params['ipmi_password']
tls_connect = module.params['tls_connect']
tls_accept = module.params['tls_accept']
tls_psk_identity = module.params['tls_psk_identity']
tls_psk = module.params['tls_psk']
tls_issuer = module.params['ca_cert']
tls_subject = module.params['tls_subject']
inventory_zabbix = module.params['inventory_zabbix']
status = module.params['status']
state = module.params['state']
timeout = module.params['timeout']
interfaces = module.params['interfaces']
force = module.params['force']
proxy = module.params['proxy']
# convert enabled to 0; disabled to 1
status = 1 if status == "disabled" else 0
zbx = None
# login to zabbix
try:
zbx = ZabbixAPI(server_url, timeout=timeout, user=http_login_user, passwd=http_login_password,
validate_certs=validate_certs)
zbx.login(login_user, login_password)
atexit.register(zbx.logout)
except Exception as e:
module.fail_json(msg="Failed to connect to Zabbix server: %s" % e)
host = Host(module, zbx)
template_ids = []
if link_templates:
template_ids = host.get_template_ids(link_templates)
group_ids = []
if host_groups:
group_ids = host.get_group_ids_by_group_names(host_groups)
ip = ""
if interfaces:
# ensure interfaces are well-formed
for interface in interfaces:
if 'type' not in interface:
module.fail_json(msg="(interface) type needs to be specified for interface '%s'." % interface)
interfacetypes = {'agent': 1, 'snmp': 2, 'ipmi': 3, 'jmx': 4}
if interface['type'] in interfacetypes.keys():
interface['type'] = interfacetypes[interface['type']]
if interface['type'] < 1 or interface['type'] > 4:
module.fail_json(msg="Interface type can only be 1-4 for interface '%s'." % interface)
if 'useip' not in interface:
interface['useip'] = 0
if 'dns' not in interface:
if interface['useip'] == 0:
module.fail_json(msg="dns needs to be set if useip is 0 on interface '%s'." % interface)
interface['dns'] = ''
if 'ip' not in interface:
if interface['useip'] == 1:
module.fail_json(msg="ip needs to be set if useip is 1 on interface '%s'." % interface)
interface['ip'] = ''
if 'main' not in interface:
interface['main'] = 0
if 'port' in interface and not isinstance(interface['port'], str):
try:
interface['port'] = str(interface['port'])
except ValueError:
                    module.fail_json(msg="port should be convertible to string on interface '%s'." % interface)
if 'port' not in interface:
if interface['type'] == 1:
interface['port'] = "10050"
elif interface['type'] == 2:
interface['port'] = "161"
elif interface['type'] == 3:
interface['port'] = "623"
elif interface['type'] == 4:
interface['port'] = "12345"
if interface['type'] == 1:
ip = interface['ip']
# Use proxy specified, or set to 0
if proxy:
proxy_id = host.get_proxyid_by_proxy_name(proxy)
else:
proxy_id = 0
# check if host exist
is_host_exist = host.is_host_exist(host_name)
if is_host_exist:
# get host id by host name
zabbix_host_obj = host.get_host_by_host_name(host_name)
host_id = zabbix_host_obj['hostid']
# If proxy is not specified as a module parameter, use the existing setting
if proxy is None:
proxy_id = int(zabbix_host_obj['proxy_hostid'])
if state == "absent":
# remove host
host.delete_host(host_id, host_name)
            module.exit_json(changed=True, result="Successfully deleted host %s" % host_name)
else:
if not host_groups:
# if host_groups have not been specified when updating an existing host, just
# get the group_ids from the existing host without updating them.
group_ids = host.get_group_ids_by_host_id(host_id)
# get existing host's interfaces
exist_interfaces = host._zapi.hostinterface.get({'output': 'extend', 'hostids': host_id})
# if no interfaces were specified with the module, start with an empty list
if not interfaces:
interfaces = []
# When force=no is specified, append existing interfaces to interfaces to update. When
# no interfaces have been specified, copy existing interfaces as specified from the API.
# Do the same with templates and host groups.
if not force or not interfaces:
for interface in copy.deepcopy(exist_interfaces):
# remove values not used during hostinterface.add/update calls
for key in tuple(interface.keys()):
if key in ['interfaceid', 'hostid', 'bulk']:
interface.pop(key, None)
for index in interface.keys():
if index in ['useip', 'main', 'type']:
interface[index] = int(interface[index])
if interface not in interfaces:
interfaces.append(interface)
if not force or link_templates is None:
template_ids = list(set(template_ids + host.get_host_templates_by_host_id(host_id)))
if not force:
for group_id in host.get_group_ids_by_host_id(host_id):
if group_id not in group_ids:
group_ids.append(group_id)
# update host
if host.check_all_properties(host_id, group_ids, status, interfaces, template_ids,
exist_interfaces, zabbix_host_obj, proxy_id, visible_name,
description, host_name, inventory_mode, inventory_zabbix,
tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, tls_connect,
ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password):
host.update_host(host_name, group_ids, status, host_id,
interfaces, exist_interfaces, proxy_id, visible_name, description, tls_connect, tls_accept,
tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password)
host.link_or_clear_template(host_id, template_ids, tls_connect, tls_accept, tls_psk_identity,
tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password)
host.update_inventory_mode(host_id, inventory_mode)
host.update_inventory_zabbix(host_id, inventory_zabbix)
                module.exit_json(changed=True,
                                 result="Successfully updated host %s (%s) and linked with template '%s'"
                                        % (host_name, ip, link_templates))
else:
module.exit_json(changed=False)
else:
if state == "absent":
# the host is already deleted.
module.exit_json(changed=False)
if not group_ids:
module.fail_json(msg="Specify at least one group for creating host '%s'." % host_name)
        if not interfaces:
module.fail_json(msg="Specify at least one interface for creating host '%s'." % host_name)
# create host
host_id = host.add_host(host_name, group_ids, status, interfaces, proxy_id, visible_name, description, tls_connect,
tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password)
host.link_or_clear_template(host_id, template_ids, tls_connect, tls_accept, tls_psk_identity,
tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password)
host.update_inventory_mode(host_id, inventory_mode)
host.update_inventory_zabbix(host_id, inventory_zabbix)
module.exit_json(changed=True, result="Successfully added host %s (%s) and linked with template '%s'" % (
host_name, ip, link_templates))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,278 |
zabbix_host add tags support
|
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
zabbix_host already has support for the inventory tag and Asset tag, but the general host tags are still missing. They would be a nice addition.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
zabbix_host.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
It would make event correlation a bit easier, and I am sure many other things as well.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Create a new host or update an existing host's info
local_action:
module: zabbix_host
server_url: "{{zabbix_server.url}}"
login_user: "{{zabbix_server.username}}"
login_password: "{{zabbix_server['password']}}"
host_name: "{{item.name}}"
host_groups: "{{item.groups | default(omit)}}"
link_templates: "{{item.templates | default(omit)}}"
tags: "{{item.tags}}"
state: "{{item.state}}"
status: "{{item.status}}"
inventory_mode: automatic
inventory_zabbix:
asset_tag: "{{item.CI}}"
alias: "{{item.HOST}}"
interfaces:
- type: 1
main: 1
useip: 1
ip: 127.0.0.1
dns: ""
port: 10050
with_items:
```
Tags would need to be a list of name/value pairs.
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/64278
|
https://github.com/ansible/ansible/pull/66777
|
5fdc9a61f0ea41cb22c28e7e63a30603579db88c
|
e3190adcbb2ff41ddf627eb98becbb7bc5838e62
| 2019-11-01T15:46:33Z |
python
| 2020-01-30T13:32:47Z |
changelogs/fragments/66777-zabbix_host_tags_macros_support.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,278 |
zabbix_host add tags support
|
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
zabbix_host already has support for the inventory tag and Asset tag, but the general host tags are still missing. They would be a nice addition.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
zabbix_host.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
It would make event correlation a bit easier, and I am sure many other things as well.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Create a new host or update an existing host's info
local_action:
module: zabbix_host
server_url: "{{zabbix_server.url}}"
login_user: "{{zabbix_server.username}}"
login_password: "{{zabbix_server['password']}}"
host_name: "{{item.name}}"
host_groups: "{{item.groups | default(omit)}}"
link_templates: "{{item.templates | default(omit)}}"
tags: "{{item.tags}}"
state: "{{item.state}}"
status: "{{item.status}}"
inventory_mode: automatic
inventory_zabbix:
asset_tag: "{{item.CI}}"
alias: "{{item.HOST}}"
interfaces:
- type: 1
main: 1
useip: 1
ip: 127.0.0.1
dns: ""
port: 10050
with_items:
```
Tags would need to be a list of name/value pairs.
<!--- HINT: You can also paste gist.github.com links for larger files -->
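A minimal sketch of how such a `tags` parameter could be normalized before being handed to the Zabbix API, which expects a list of `{'tag': ..., 'value': ...}` objects. The helper name `normalize_tags` is illustrative only and not part of the existing module:

```python
def normalize_tags(tags):
    """Coerce a list of name/value pairs into Zabbix API tag objects.

    Accepts None or a list of dicts; each dict needs at least a 'tag' key,
    'value' defaults to an empty string as in the Zabbix host.tag object.
    """
    normalized = []
    for entry in tags or []:
        if not isinstance(entry, dict) or 'tag' not in entry:
            raise ValueError("each tag needs at least a 'tag' key: %r" % (entry,))
        normalized.append({'tag': str(entry['tag']),
                           'value': str(entry.get('value', ''))})
    return normalized
```

The normalized list could then be passed straight through in the `host.create`/`host.update` request parameters.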
|
https://github.com/ansible/ansible/issues/64278
|
https://github.com/ansible/ansible/pull/66777
|
5fdc9a61f0ea41cb22c28e7e63a30603579db88c
|
e3190adcbb2ff41ddf627eb98becbb7bc5838e62
| 2019-11-01T15:46:33Z |
python
| 2020-01-30T13:32:47Z |
lib/ansible/modules/monitoring/zabbix/zabbix_host.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2013-2014, Epic Games, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: zabbix_host
short_description: Create/update/delete Zabbix hosts
description:
- This module allows you to create, modify and delete Zabbix host entries and associated group and template data.
version_added: "2.0"
author:
- "Cove (@cove)"
- Tony Minfei Ding (!UNKNOWN)
- Harrison Gu (@harrisongu)
- Werner Dijkerman (@dj-wasabi)
- Eike Frost (@eikef)
requirements:
- "python >= 2.6"
- "zabbix-api >= 0.5.4"
options:
host_name:
description:
- Name of the host in Zabbix.
- I(host_name) is the unique identifier used and cannot be updated using this module.
required: true
type: str
visible_name:
description:
- Visible name of the host in Zabbix.
version_added: '2.3'
type: str
description:
description:
- Description of the host in Zabbix.
version_added: '2.5'
type: str
host_groups:
description:
- List of host groups the host is part of.
type: list
elements: str
link_templates:
description:
- List of templates linked to the host.
type: list
elements: str
inventory_mode:
description:
- Configure the inventory mode.
choices: ['automatic', 'manual', 'disabled']
version_added: '2.1'
type: str
inventory_zabbix:
description:
      - Add facts for a Zabbix inventory (e.g. Tag) (see example below).
- Please review the interface documentation for more information on the supported properties
- U(https://www.zabbix.com/documentation/3.2/manual/api/reference/host/object#host_inventory)
version_added: '2.5'
type: dict
status:
description:
- Monitoring status of the host.
choices: ['enabled', 'disabled']
default: 'enabled'
type: str
state:
description:
- State of the host.
      - On C(present), it will create the host if it does not exist or update it if the associated data is different.
      - On C(absent), it will remove the host if it exists.
choices: ['present', 'absent']
default: 'present'
type: str
proxy:
description:
- The name of the Zabbix proxy to be used.
type: str
interfaces:
type: list
elements: dict
description:
- List of interfaces to be created for the host (see example below).
- For more information, review host interface documentation at
- U(https://www.zabbix.com/documentation/4.0/manual/api/reference/hostinterface/object)
suboptions:
type:
description:
- Interface type to add
- Numerical values are also accepted for interface type
- 1 = agent
- 2 = snmp
- 3 = ipmi
- 4 = jmx
choices: ['agent', 'snmp', 'ipmi', 'jmx']
required: true
main:
type: int
description:
- Whether the interface is used as default.
- If multiple interfaces with the same type are provided, only one can be default.
- 0 (not default), 1 (default)
default: 0
choices: [0, 1]
useip:
type: int
description:
- Connect to host interface with IP address instead of DNS name.
- 0 (don't use ip), 1 (use ip)
default: 0
choices: [0, 1]
ip:
type: str
description:
- IP address used by host interface.
- Required if I(useip=1).
default: ''
dns:
type: str
description:
- DNS name of the host interface.
- Required if I(useip=0).
default: ''
port:
type: str
description:
- Port used by host interface.
- If not specified, default port for each type of interface is used
- 10050 if I(type='agent')
- 161 if I(type='snmp')
- 623 if I(type='ipmi')
- 12345 if I(type='jmx')
bulk:
type: int
description:
- Whether to use bulk SNMP requests.
- 0 (don't use bulk requests), 1 (use bulk requests)
choices: [0, 1]
default: 1
default: []
tls_connect:
description:
- Specifies what encryption to use for outgoing connections.
- Possible values, 1 (no encryption), 2 (PSK), 4 (certificate).
- Works only with >= Zabbix 3.0
default: 1
version_added: '2.5'
type: int
tls_accept:
description:
- Specifies what types of connections are allowed for incoming connections.
      - The tls_accept parameter accepts values of 1 to 7.
      - Possible values, 1 (no encryption), 2 (PSK), 4 (certificate).
      - Values can be combined by summing them; for example, 5 (1+4) allows both unencrypted and certificate-based connections.
- Works only with >= Zabbix 3.0
default: 1
version_added: '2.5'
type: int
tls_psk_identity:
description:
- It is a unique name by which this specific PSK is referred to by Zabbix components
      - Do not put sensitive information in the PSK identity string; it is transmitted over the network unencrypted.
- Works only with >= Zabbix 3.0
version_added: '2.5'
type: str
tls_psk:
description:
- PSK value is a hard to guess string of hexadecimal digits.
- The preshared key, at least 32 hex digits. Required if either I(tls_connect) or I(tls_accept) has PSK enabled.
- Works only with >= Zabbix 3.0
version_added: '2.5'
type: str
ca_cert:
description:
- Required certificate issuer.
- Works only with >= Zabbix 3.0
version_added: '2.5'
aliases: [ tls_issuer ]
type: str
tls_subject:
description:
- Required certificate subject.
- Works only with >= Zabbix 3.0
version_added: '2.5'
type: str
ipmi_authtype:
description:
- IPMI authentication algorithm.
- Please review the Host object documentation for more information on the supported properties
- 'https://www.zabbix.com/documentation/3.4/manual/api/reference/host/object'
- Possible values are, C(0) (none), C(1) (MD2), C(2) (MD5), C(4) (straight), C(5) (OEM), C(6) (RMCP+),
with -1 being the API default.
- Please note that the Zabbix API will treat absent settings as default when updating
any of the I(ipmi_)-options; this means that if you attempt to set any of the four
options individually, the rest will be reset to default values.
version_added: '2.5'
type: int
ipmi_privilege:
description:
- IPMI privilege level.
- Please review the Host object documentation for more information on the supported properties
- 'https://www.zabbix.com/documentation/3.4/manual/api/reference/host/object'
- Possible values are C(1) (callback), C(2) (user), C(3) (operator), C(4) (admin), C(5) (OEM), with C(2)
being the API default.
- also see the last note in the I(ipmi_authtype) documentation
version_added: '2.5'
type: int
ipmi_username:
description:
- IPMI username.
- also see the last note in the I(ipmi_authtype) documentation
version_added: '2.5'
type: str
ipmi_password:
description:
- IPMI password.
- also see the last note in the I(ipmi_authtype) documentation
version_added: '2.5'
type: str
force:
description:
- Overwrite the host configuration, even if already present.
type: bool
default: 'yes'
version_added: '2.0'
extends_documentation_fragment:
- zabbix
'''
EXAMPLES = r'''
- name: Create a new host or update an existing host's info
local_action:
module: zabbix_host
server_url: http://monitor.example.com
login_user: username
login_password: password
host_name: ExampleHost
visible_name: ExampleName
description: My ExampleHost Description
host_groups:
- Example group1
- Example group2
link_templates:
- Example template1
- Example template2
status: enabled
state: present
inventory_mode: manual
inventory_zabbix:
tag: "{{ your_tag }}"
alias: "{{ your_alias }}"
      notes: "Special Information: {{ your_informations | default('None') }}"
location: "{{ your_location }}"
site_rack: "{{ your_site_rack }}"
os: "{{ your_os }}"
hardware: "{{ your_hardware }}"
ipmi_authtype: 2
ipmi_privilege: 4
ipmi_username: username
ipmi_password: password
interfaces:
- type: 1
main: 1
useip: 1
ip: 10.xx.xx.xx
dns: ""
port: "10050"
- type: 4
main: 1
useip: 1
ip: 10.xx.xx.xx
dns: ""
port: "12345"
proxy: a.zabbix.proxy
- name: Update an existing host's TLS settings
local_action:
module: zabbix_host
server_url: http://monitor.example.com
login_user: username
login_password: password
host_name: ExampleHost
visible_name: ExampleName
host_groups:
- Example group1
tls_psk_identity: test
tls_connect: 2
tls_psk: 123456789abcdef123456789abcdef12
'''
import atexit
import copy
import traceback
try:
from zabbix_api import ZabbixAPI
HAS_ZABBIX_API = True
except ImportError:
ZBX_IMP_ERR = traceback.format_exc()
HAS_ZABBIX_API = False
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class Host(object):
def __init__(self, module, zbx):
self._module = module
self._zapi = zbx
self._zbx_api_version = zbx.api_version()[:5]
    # check if the host exists
def is_host_exist(self, host_name):
result = self._zapi.host.get({'filter': {'host': host_name}})
return result
# check if host group exists
def check_host_group_exist(self, group_names):
for group_name in group_names:
result = self._zapi.hostgroup.get({'filter': {'name': group_name}})
if not result:
self._module.fail_json(msg="Hostgroup not found: %s" % group_name)
return True
    def get_template_ids(self, template_list):
        template_ids = []
        if template_list is None or len(template_list) == 0:
            return template_ids
        for template in template_list:
            # don't shadow the template_list parameter while iterating over it
            result = self._zapi.template.get({'output': 'extend', 'filter': {'host': template}})
            if len(result) < 1:
                self._module.fail_json(msg="Template not found: %s" % template)
            else:
                template_ids.append(result[0]['templateid'])
        return template_ids
def add_host(self, host_name, group_ids, status, interfaces, proxy_id, visible_name, description, tls_connect,
tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
parameters = {'host': host_name, 'interfaces': interfaces, 'groups': group_ids, 'status': status,
'tls_connect': tls_connect, 'tls_accept': tls_accept}
if proxy_id:
parameters['proxy_hostid'] = proxy_id
if visible_name:
parameters['name'] = visible_name
if tls_psk_identity is not None:
parameters['tls_psk_identity'] = tls_psk_identity
if tls_psk is not None:
parameters['tls_psk'] = tls_psk
if tls_issuer is not None:
parameters['tls_issuer'] = tls_issuer
if tls_subject is not None:
parameters['tls_subject'] = tls_subject
if description:
parameters['description'] = description
if ipmi_authtype is not None:
parameters['ipmi_authtype'] = ipmi_authtype
if ipmi_privilege is not None:
parameters['ipmi_privilege'] = ipmi_privilege
if ipmi_username is not None:
parameters['ipmi_username'] = ipmi_username
if ipmi_password is not None:
parameters['ipmi_password'] = ipmi_password
host_list = self._zapi.host.create(parameters)
if len(host_list) >= 1:
return host_list['hostids'][0]
except Exception as e:
self._module.fail_json(msg="Failed to create host %s: %s" % (host_name, e))
def update_host(self, host_name, group_ids, status, host_id, interfaces, exist_interface_list, proxy_id,
visible_name, description, tls_connect, tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype,
ipmi_privilege, ipmi_username, ipmi_password):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
parameters = {'hostid': host_id, 'groups': group_ids, 'status': status, 'tls_connect': tls_connect,
'tls_accept': tls_accept}
if proxy_id >= 0:
parameters['proxy_hostid'] = proxy_id
if visible_name:
parameters['name'] = visible_name
if tls_psk_identity:
parameters['tls_psk_identity'] = tls_psk_identity
if tls_psk:
parameters['tls_psk'] = tls_psk
if tls_issuer:
parameters['tls_issuer'] = tls_issuer
if tls_subject:
parameters['tls_subject'] = tls_subject
if description:
parameters['description'] = description
if ipmi_authtype:
parameters['ipmi_authtype'] = ipmi_authtype
if ipmi_privilege:
parameters['ipmi_privilege'] = ipmi_privilege
if ipmi_username:
parameters['ipmi_username'] = ipmi_username
if ipmi_password:
parameters['ipmi_password'] = ipmi_password
self._zapi.host.update(parameters)
interface_list_copy = exist_interface_list
if interfaces:
for interface in interfaces:
flag = False
interface_str = interface
for exist_interface in exist_interface_list:
interface_type = int(interface['type'])
exist_interface_type = int(exist_interface['type'])
if interface_type == exist_interface_type:
# update
interface_str['interfaceid'] = exist_interface['interfaceid']
self._zapi.hostinterface.update(interface_str)
flag = True
interface_list_copy.remove(exist_interface)
break
if not flag:
# add
interface_str['hostid'] = host_id
self._zapi.hostinterface.create(interface_str)
# remove
remove_interface_ids = []
for remove_interface in interface_list_copy:
interface_id = remove_interface['interfaceid']
remove_interface_ids.append(interface_id)
if len(remove_interface_ids) > 0:
self._zapi.hostinterface.delete(remove_interface_ids)
except Exception as e:
self._module.fail_json(msg="Failed to update host %s: %s" % (host_name, e))
def delete_host(self, host_id, host_name):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.delete([host_id])
except Exception as e:
self._module.fail_json(msg="Failed to delete host %s: %s" % (host_name, e))
# get host by host name
def get_host_by_host_name(self, host_name):
host_list = self._zapi.host.get({'output': 'extend', 'selectInventory': 'extend', 'filter': {'host': [host_name]}})
if len(host_list) < 1:
self._module.fail_json(msg="Host not found: %s" % host_name)
else:
return host_list[0]
# get proxyid by proxy name
def get_proxyid_by_proxy_name(self, proxy_name):
proxy_list = self._zapi.proxy.get({'output': 'extend', 'filter': {'host': [proxy_name]}})
if len(proxy_list) < 1:
self._module.fail_json(msg="Proxy not found: %s" % proxy_name)
else:
return int(proxy_list[0]['proxyid'])
# get group ids by group names
def get_group_ids_by_group_names(self, group_names):
if self.check_host_group_exist(group_names):
return self._zapi.hostgroup.get({'output': 'groupid', 'filter': {'name': group_names}})
# get host groups ids by host id
def get_group_ids_by_host_id(self, host_id):
return self._zapi.hostgroup.get({'output': 'groupid', 'hostids': host_id})
# get host templates by host id
def get_host_templates_by_host_id(self, host_id):
template_ids = []
template_list = self._zapi.template.get({'output': 'extend', 'hostids': host_id})
for template in template_list:
template_ids.append(template['templateid'])
return template_ids
    # check whether exist_interfaces matches the given interfaces
def check_interface_properties(self, exist_interface_list, interfaces):
interfaces_port_list = []
if interfaces is not None:
if len(interfaces) >= 1:
for interface in interfaces:
interfaces_port_list.append(str(interface['port']))
exist_interface_ports = []
if len(exist_interface_list) >= 1:
for exist_interface in exist_interface_list:
exist_interface_ports.append(str(exist_interface['port']))
if set(interfaces_port_list) != set(exist_interface_ports):
return True
        for exist_interface in exist_interface_list:
            exist_interface_port = str(exist_interface['port'])
            for interface in interfaces:
                interface_port = str(interface['port'])
                if interface_port == exist_interface_port:
for key in interface.keys():
if str(exist_interface[key]) != str(interface[key]):
return True
return False
# get the status of host by host
def get_host_status_by_host(self, host):
return host['status']
# check all the properties before link or clear template
def check_all_properties(self, host_id, group_ids, status, interfaces, template_ids,
exist_interfaces, host, proxy_id, visible_name, description, host_name,
inventory_mode, inventory_zabbix, tls_accept, tls_psk_identity, tls_psk,
tls_issuer, tls_subject, tls_connect, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password):
# get the existing host's groups
exist_host_groups = sorted(self.get_group_ids_by_host_id(host_id), key=lambda k: k['groupid'])
if sorted(group_ids, key=lambda k: k['groupid']) != exist_host_groups:
return True
# get the existing status
exist_status = self.get_host_status_by_host(host)
if int(status) != int(exist_status):
return True
        # check whether exist_interfaces matches the given interfaces
if self.check_interface_properties(exist_interfaces, interfaces):
return True
# get the existing templates
exist_template_ids = self.get_host_templates_by_host_id(host_id)
if set(list(template_ids)) != set(exist_template_ids):
return True
if int(host['proxy_hostid']) != int(proxy_id):
return True
# Check whether the visible_name has changed; Zabbix defaults to the technical hostname if not set.
if visible_name:
if host['name'] != visible_name:
return True
# Only compare description if it is given as a module parameter
if description:
if host['description'] != description:
return True
if inventory_mode:
if LooseVersion(self._zbx_api_version) <= LooseVersion('4.4.0'):
if host['inventory']:
if int(host['inventory']['inventory_mode']) != self.inventory_mode_numeric(inventory_mode):
return True
elif inventory_mode != 'disabled':
return True
else:
if int(host['inventory_mode']) != self.inventory_mode_numeric(inventory_mode):
return True
if inventory_zabbix:
proposed_inventory = copy.deepcopy(host['inventory'])
proposed_inventory.update(inventory_zabbix)
if proposed_inventory != host['inventory']:
return True
if tls_accept is not None and 'tls_accept' in host:
if int(host['tls_accept']) != tls_accept:
return True
if tls_psk_identity is not None and 'tls_psk_identity' in host:
if host['tls_psk_identity'] != tls_psk_identity:
return True
if tls_psk is not None and 'tls_psk' in host:
if host['tls_psk'] != tls_psk:
return True
if tls_issuer is not None and 'tls_issuer' in host:
if host['tls_issuer'] != tls_issuer:
return True
if tls_subject is not None and 'tls_subject' in host:
if host['tls_subject'] != tls_subject:
return True
if tls_connect is not None and 'tls_connect' in host:
if int(host['tls_connect']) != tls_connect:
return True
if ipmi_authtype is not None:
if int(host['ipmi_authtype']) != ipmi_authtype:
return True
if ipmi_privilege is not None:
if int(host['ipmi_privilege']) != ipmi_privilege:
return True
if ipmi_username is not None:
if host['ipmi_username'] != ipmi_username:
return True
if ipmi_password is not None:
if host['ipmi_password'] != ipmi_password:
return True
return False
# link or clear template of the host
def link_or_clear_template(self, host_id, template_id_list, tls_connect, tls_accept, tls_psk_identity, tls_psk,
tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password):
# get host's exist template ids
exist_template_id_list = self.get_host_templates_by_host_id(host_id)
exist_template_ids = set(exist_template_id_list)
template_ids = set(template_id_list)
template_id_list = list(template_ids)
# get unlink and clear templates
templates_clear = exist_template_ids.difference(template_ids)
templates_clear_list = list(templates_clear)
request_str = {'hostid': host_id, 'templates': template_id_list, 'templates_clear': templates_clear_list,
'tls_connect': tls_connect, 'tls_accept': tls_accept, 'ipmi_authtype': ipmi_authtype,
'ipmi_privilege': ipmi_privilege, 'ipmi_username': ipmi_username, 'ipmi_password': ipmi_password}
if tls_psk_identity is not None:
request_str['tls_psk_identity'] = tls_psk_identity
if tls_psk is not None:
request_str['tls_psk'] = tls_psk
if tls_issuer is not None:
request_str['tls_issuer'] = tls_issuer
if tls_subject is not None:
request_str['tls_subject'] = tls_subject
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update(request_str)
except Exception as e:
self._module.fail_json(msg="Failed to link template to host: %s" % e)
    def inventory_mode_numeric(self, inventory_mode):
        if inventory_mode == "automatic":
            return 1
        elif inventory_mode == "manual":
            return 0
        elif inventory_mode == "disabled":
            return -1
        return inventory_mode
# Update the host inventory_mode
def update_inventory_mode(self, host_id, inventory_mode):
# nothing was set, do nothing
if not inventory_mode:
return
inventory_mode = self.inventory_mode_numeric(inventory_mode)
# watch for - https://support.zabbix.com/browse/ZBX-6033
request_str = {'hostid': host_id, 'inventory_mode': inventory_mode}
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update(request_str)
except Exception as e:
self._module.fail_json(msg="Failed to set inventory_mode to host: %s" % e)
def update_inventory_zabbix(self, host_id, inventory):
if not inventory:
return
request_str = {'hostid': host_id, 'inventory': inventory}
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update(request_str)
except Exception as e:
self._module.fail_json(msg="Failed to set inventory to host: %s" % e)
def main():
module = AnsibleModule(
argument_spec=dict(
server_url=dict(type='str', required=True, aliases=['url']),
login_user=dict(type='str', required=True),
login_password=dict(type='str', required=True, no_log=True),
host_name=dict(type='str', required=True),
http_login_user=dict(type='str', required=False, default=None),
http_login_password=dict(type='str', required=False, default=None, no_log=True),
validate_certs=dict(type='bool', required=False, default=True),
host_groups=dict(type='list', required=False),
link_templates=dict(type='list', required=False),
status=dict(type='str', default="enabled", choices=['enabled', 'disabled']),
state=dict(type='str', default="present", choices=['present', 'absent']),
inventory_mode=dict(type='str', required=False, choices=['automatic', 'manual', 'disabled']),
ipmi_authtype=dict(type='int', default=None),
ipmi_privilege=dict(type='int', default=None),
ipmi_username=dict(type='str', required=False, default=None),
ipmi_password=dict(type='str', required=False, default=None, no_log=True),
tls_connect=dict(type='int', default=1),
tls_accept=dict(type='int', default=1),
tls_psk_identity=dict(type='str', required=False),
tls_psk=dict(type='str', required=False),
ca_cert=dict(type='str', required=False, aliases=['tls_issuer']),
tls_subject=dict(type='str', required=False),
inventory_zabbix=dict(type='dict', required=False),
timeout=dict(type='int', default=10),
interfaces=dict(type='list', required=False),
force=dict(type='bool', default=True),
proxy=dict(type='str', required=False),
visible_name=dict(type='str', required=False),
description=dict(type='str', required=False)
),
supports_check_mode=True
)
if not HAS_ZABBIX_API:
module.fail_json(msg=missing_required_lib('zabbix-api', url='https://pypi.org/project/zabbix-api/'), exception=ZBX_IMP_ERR)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
http_login_user = module.params['http_login_user']
http_login_password = module.params['http_login_password']
validate_certs = module.params['validate_certs']
host_name = module.params['host_name']
visible_name = module.params['visible_name']
description = module.params['description']
host_groups = module.params['host_groups']
link_templates = module.params['link_templates']
inventory_mode = module.params['inventory_mode']
ipmi_authtype = module.params['ipmi_authtype']
ipmi_privilege = module.params['ipmi_privilege']
ipmi_username = module.params['ipmi_username']
ipmi_password = module.params['ipmi_password']
tls_connect = module.params['tls_connect']
tls_accept = module.params['tls_accept']
tls_psk_identity = module.params['tls_psk_identity']
tls_psk = module.params['tls_psk']
tls_issuer = module.params['ca_cert']
tls_subject = module.params['tls_subject']
inventory_zabbix = module.params['inventory_zabbix']
status = module.params['status']
state = module.params['state']
timeout = module.params['timeout']
interfaces = module.params['interfaces']
force = module.params['force']
proxy = module.params['proxy']
# convert enabled to 0; disabled to 1
status = 1 if status == "disabled" else 0
zbx = None
# login to zabbix
try:
zbx = ZabbixAPI(server_url, timeout=timeout, user=http_login_user, passwd=http_login_password,
validate_certs=validate_certs)
zbx.login(login_user, login_password)
atexit.register(zbx.logout)
except Exception as e:
module.fail_json(msg="Failed to connect to Zabbix server: %s" % e)
host = Host(module, zbx)
template_ids = []
if link_templates:
template_ids = host.get_template_ids(link_templates)
group_ids = []
if host_groups:
group_ids = host.get_group_ids_by_group_names(host_groups)
ip = ""
if interfaces:
# ensure interfaces are well-formed
for interface in interfaces:
if 'type' not in interface:
module.fail_json(msg="(interface) type needs to be specified for interface '%s'." % interface)
interfacetypes = {'agent': 1, 'snmp': 2, 'ipmi': 3, 'jmx': 4}
if interface['type'] in interfacetypes.keys():
interface['type'] = interfacetypes[interface['type']]
if interface['type'] < 1 or interface['type'] > 4:
module.fail_json(msg="Interface type can only be 1-4 for interface '%s'." % interface)
if 'useip' not in interface:
interface['useip'] = 0
if 'dns' not in interface:
if interface['useip'] == 0:
module.fail_json(msg="dns needs to be set if useip is 0 on interface '%s'." % interface)
interface['dns'] = ''
if 'ip' not in interface:
if interface['useip'] == 1:
module.fail_json(msg="ip needs to be set if useip is 1 on interface '%s'." % interface)
interface['ip'] = ''
if 'main' not in interface:
interface['main'] = 0
if 'port' in interface and not isinstance(interface['port'], str):
try:
interface['port'] = str(interface['port'])
except ValueError:
module.fail_json(msg="port should be convertible to string on interface '%s'." % interface)
if 'port' not in interface:
if interface['type'] == 1:
interface['port'] = "10050"
elif interface['type'] == 2:
interface['port'] = "161"
elif interface['type'] == 3:
interface['port'] = "623"
elif interface['type'] == 4:
interface['port'] = "12345"
if interface['type'] == 1:
ip = interface['ip']
# Use proxy specified, or set to 0
if proxy:
proxy_id = host.get_proxyid_by_proxy_name(proxy)
else:
proxy_id = 0
# check if host exist
is_host_exist = host.is_host_exist(host_name)
if is_host_exist:
# get host id by host name
zabbix_host_obj = host.get_host_by_host_name(host_name)
host_id = zabbix_host_obj['hostid']
# If proxy is not specified as a module parameter, use the existing setting
if proxy is None:
proxy_id = int(zabbix_host_obj['proxy_hostid'])
if state == "absent":
# remove host
host.delete_host(host_id, host_name)
module.exit_json(changed=True, result="Successfully deleted host %s" % host_name)
else:
if not host_groups:
# if host_groups have not been specified when updating an existing host, just
# get the group_ids from the existing host without updating them.
group_ids = host.get_group_ids_by_host_id(host_id)
# get existing host's interfaces
exist_interfaces = host._zapi.hostinterface.get({'output': 'extend', 'hostids': host_id})
# if no interfaces were specified with the module, start with an empty list
if not interfaces:
interfaces = []
# When force=no is specified, append existing interfaces to interfaces to update. When
# no interfaces have been specified, copy existing interfaces as specified from the API.
# Do the same with templates and host groups.
if not force or not interfaces:
for interface in copy.deepcopy(exist_interfaces):
# remove values not used during hostinterface.add/update calls
for key in tuple(interface.keys()):
if key in ['interfaceid', 'hostid', 'bulk']:
interface.pop(key, None)
for index in interface.keys():
if index in ['useip', 'main', 'type']:
interface[index] = int(interface[index])
if interface not in interfaces:
interfaces.append(interface)
if not force or link_templates is None:
template_ids = list(set(template_ids + host.get_host_templates_by_host_id(host_id)))
if not force:
for group_id in host.get_group_ids_by_host_id(host_id):
if group_id not in group_ids:
group_ids.append(group_id)
# update host
if host.check_all_properties(host_id, group_ids, status, interfaces, template_ids,
exist_interfaces, zabbix_host_obj, proxy_id, visible_name,
description, host_name, inventory_mode, inventory_zabbix,
tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, tls_connect,
ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password):
host.update_host(host_name, group_ids, status, host_id,
interfaces, exist_interfaces, proxy_id, visible_name, description, tls_connect, tls_accept,
tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password)
host.link_or_clear_template(host_id, template_ids, tls_connect, tls_accept, tls_psk_identity,
tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password)
host.update_inventory_mode(host_id, inventory_mode)
host.update_inventory_zabbix(host_id, inventory_zabbix)
module.exit_json(changed=True,
result="Successfully updated host %s (%s) and linked with template '%s'"
% (host_name, ip, link_templates))
else:
module.exit_json(changed=False)
else:
if state == "absent":
# the host is already deleted.
module.exit_json(changed=False)
if not group_ids:
module.fail_json(msg="Specify at least one group for creating host '%s'." % host_name)
if not interfaces:
module.fail_json(msg="Specify at least one interface for creating host '%s'." % host_name)
# create host
host_id = host.add_host(host_name, group_ids, status, interfaces, proxy_id, visible_name, description, tls_connect,
tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password)
host.link_or_clear_template(host_id, template_ids, tls_connect, tls_accept, tls_psk_identity,
tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password)
host.update_inventory_mode(host_id, inventory_mode)
host.update_inventory_zabbix(host_id, inventory_zabbix)
module.exit_json(changed=True, result="Successfully added host %s (%s) and linked with template '%s'" % (
host_name, ip, link_templates))
if __name__ == '__main__':
main()
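The inventory mode handling in the module above boils down to a small string-to-integer lookup that Zabbix's API expects. A minimal standalone sketch of the same logic (the `INVENTORY_MODES` name is chosen here for illustration and is not part of the module):

```python
# Zabbix's host API expects inventory_mode as an integer:
# 1 = automatic, 0 = manual, -1 = disabled.
INVENTORY_MODES = {"automatic": 1, "manual": 0, "disabled": -1}

def inventory_mode_numeric(inventory_mode):
    """Translate a human-readable inventory mode to the Zabbix API integer.

    Values that are already numeric (or unrecognized) pass through
    unchanged, mirroring the fall-through behaviour of the module method.
    """
    return INVENTORY_MODES.get(inventory_mode, inventory_mode)
```

Using a dict lookup with a pass-through default keeps the translation in one place and avoids the chained `if/elif` blocks.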
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,593 |
Ansible crashes on nxos_facts with virtual Nexus
|
##### SUMMARY
Ansible crashes when `gather_facts` is true for a playbook targeting virtual Nexus switches. I believe this is because it is looking for fan information, but the switch is virtual and therefore has no fans.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
nxos_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /Users/kbreit/development/gitlab/dev/network_configuration/ansible.cfg
configured module search path = ['/Users/kbreit/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:38) [Clang 10.0.0 (clang-1000.11.45.5)]
```
##### OS / ENVIRONMENT
Control node is macOS
Target is NX-OS 9.3(3)
##### STEPS TO REPRODUCE
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Configure Access Devices
connection: network_cli
gather_facts: true
hosts: all
tasks:
- debug:
var: hostvars
```
##### EXPECTED RESULTS
I expect the playbook to execute, even if it’s a single task. It should collect facts.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
<e7csco0nxos03.datalinklabs.local> EXEC /bin/sh -c 'rm -f -r /Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/ > /dev/null 2>&1 && sleep 0'
fatal: [e7csco0nxos03.datalinklabs.local]: FAILED! => {
"ansible_facts": {},
"changed": false,
"failed_modules": {
"nxos_facts": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"exception": "Traceback (most recent call last):\n File \"/Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/AnsiballZ_nxos_facts.py\", line 102, in <module>\n _ansiballz_main()\n File \"/Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/AnsiballZ_nxos_facts.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/AnsiballZ_nxos_facts.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.network.nxos.nxos_facts', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 188, in run_module\n fname, loader, pkg_name)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 82, in _run_module_code\n mod_name, mod_fname, mod_loader, pkg_name)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/modules/network/nxos/nxos_facts.py\", line 239, in <module>\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/modules/network/nxos/nxos_facts.py\", line 230, in main\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/nxos/facts/facts.py\", line 71, in get_facts\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/common/facts/facts.py\", line 124, in get_network_legacy_facts\n File 
\"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/nxos/facts/legacy/base.py\", line 594, in populate\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/nxos/facts/legacy/base.py\", line 631, in parse_structured_fan_info\nKeyError: 'TABLE_faninfo'\n",
"failed": true,
"module_stderr": "Traceback (most recent call last):\n File \"/Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/AnsiballZ_nxos_facts.py\", line 102, in <module>\n _ansiballz_main()\n File \"/Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/AnsiballZ_nxos_facts.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/AnsiballZ_nxos_facts.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.network.nxos.nxos_facts', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 188, in run_module\n fname, loader, pkg_name)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 82, in _run_module_code\n mod_name, mod_fname, mod_loader, pkg_name)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/modules/network/nxos/nxos_facts.py\", line 239, in <module>\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/modules/network/nxos/nxos_facts.py\", line 230, in main\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/nxos/facts/facts.py\", line 71, in get_facts\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/common/facts/facts.py\", line 124, in get_network_legacy_facts\n File 
\"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/nxos/facts/legacy/base.py\", line 594, in populate\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/nxos/facts/legacy/base.py\", line 631, in parse_structured_fan_info\nKeyError: 'TABLE_faninfo'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1,
"warnings": [
"Platform darwin on host e7csco0nxos03.datalinklabs.local is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information."
]
}
},
"msg": "The following modules failed to execute: nxos_facts\n"
}
```
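The crash comes from an unguarded dictionary lookup: on a virtual Nexus, the structured fan output simply has no `TABLE_faninfo` key. A defensive sketch of the parsing step (the function body here is an illustration based on the traceback, not the actual upstream patch):

```python
def parse_structured_fan_info(data):
    """Return fan info rows from structured 'show environment fan' output.

    Virtual Nexus switches have no fans, so the 'TABLE_faninfo' key may be
    absent entirely; in that case an empty list is returned instead of
    raising KeyError.
    """
    fan_table = data.get('TABLE_faninfo')
    if fan_table is None:  # no physical fans on this platform
        return []
    rows = fan_table.get('ROW_faninfo', [])
    if isinstance(rows, dict):  # a single fan comes back as a bare dict
        rows = [rows]
    return rows
```

Guarding with `dict.get` and normalizing the single-row dict case handles both physical and virtual platforms with one code path.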
|
https://github.com/ansible/ansible/issues/66593
|
https://github.com/ansible/ansible/pull/66866
|
72e1716f29fbd25f0aa98fdbe36eaa959ea805ed
|
bf65e7a3f6b89e0b22e30a9944fe75a37230844c
| 2020-01-18T04:38:45Z |
python
| 2020-01-30T15:23:21Z |
changelogs/fragments/66866_nxos_fan_facts.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,593 |
Ansible crashes on nxos_facts with virtual Nexus
|
##### SUMMARY
Ansible crashes when `gather_facts` is true for a playbook targeting virtual Nexus switches. I believe this is because it is looking for fan information, but the switch is virtual and therefore has no fans.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
nxos_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /Users/kbreit/development/gitlab/dev/network_configuration/ansible.cfg
configured module search path = ['/Users/kbreit/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:38) [Clang 10.0.0 (clang-1000.11.45.5)]
```
##### OS / ENVIRONMENT
Control node is macOS
Target is NX-OS 9.3(3)
##### STEPS TO REPRODUCE
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Configure Access Devices
connection: network_cli
gather_facts: true
hosts: all
tasks:
- debug:
var: hostvars
```
##### EXPECTED RESULTS
I expect the playbook to execute, even if it’s a single task. It should collect facts.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
<e7csco0nxos03.datalinklabs.local> EXEC /bin/sh -c 'rm -f -r /Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/ > /dev/null 2>&1 && sleep 0'
fatal: [e7csco0nxos03.datalinklabs.local]: FAILED! => {
"ansible_facts": {},
"changed": false,
"failed_modules": {
"nxos_facts": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"exception": "Traceback (most recent call last):\n File \"/Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/AnsiballZ_nxos_facts.py\", line 102, in <module>\n _ansiballz_main()\n File \"/Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/AnsiballZ_nxos_facts.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/AnsiballZ_nxos_facts.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.network.nxos.nxos_facts', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 188, in run_module\n fname, loader, pkg_name)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 82, in _run_module_code\n mod_name, mod_fname, mod_loader, pkg_name)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/modules/network/nxos/nxos_facts.py\", line 239, in <module>\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/modules/network/nxos/nxos_facts.py\", line 230, in main\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/nxos/facts/facts.py\", line 71, in get_facts\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/common/facts/facts.py\", line 124, in get_network_legacy_facts\n File 
\"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/nxos/facts/legacy/base.py\", line 594, in populate\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/nxos/facts/legacy/base.py\", line 631, in parse_structured_fan_info\nKeyError: 'TABLE_faninfo'\n",
"failed": true,
"module_stderr": "Traceback (most recent call last):\n File \"/Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/AnsiballZ_nxos_facts.py\", line 102, in <module>\n _ansiballz_main()\n File \"/Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/AnsiballZ_nxos_facts.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/Users/kbreit/.ansible/tmp/ansible-local-78117vavbd2gv/ansible-tmp-1579321879.577556-95130404538599/AnsiballZ_nxos_facts.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.network.nxos.nxos_facts', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 188, in run_module\n fname, loader, pkg_name)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 82, in _run_module_code\n mod_name, mod_fname, mod_loader, pkg_name)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/modules/network/nxos/nxos_facts.py\", line 239, in <module>\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/modules/network/nxos/nxos_facts.py\", line 230, in main\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/nxos/facts/facts.py\", line 71, in get_facts\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/common/facts/facts.py\", line 124, in get_network_legacy_facts\n File 
\"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/nxos/facts/legacy/base.py\", line 594, in populate\n File \"/var/folders/mv/cqcg7q510t9b5bn580y3ksb80000gq/T/ansible_nxos_facts_payload_YMZD8m/ansible_nxos_facts_payload.zip/ansible/module_utils/network/nxos/facts/legacy/base.py\", line 631, in parse_structured_fan_info\nKeyError: 'TABLE_faninfo'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1,
"warnings": [
"Platform darwin on host e7csco0nxos03.datalinklabs.local is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information."
]
}
},
"msg": "The following modules failed to execute: nxos_facts\n"
}
```
|
https://github.com/ansible/ansible/issues/66593
|
https://github.com/ansible/ansible/pull/66866
|
72e1716f29fbd25f0aa98fdbe36eaa959ea805ed
|
bf65e7a3f6b89e0b22e30a9944fe75a37230844c
| 2020-01-18T04:38:45Z |
python
| 2020-01-30T15:23:21Z |
lib/ansible/module_utils/network/nxos/facts/legacy/base.py
|
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
import platform
import re
from ansible.module_utils.network.nxos.nxos import run_commands, get_config, get_capabilities
from ansible.module_utils.network.nxos.utils.utils import get_interface_type, normalize_interface
from ansible.module_utils.six import iteritems
g_config = None
class FactsBase(object):
def __init__(self, module):
self.module = module
self.warnings = list()
self.facts = dict()
self.capabilities = get_capabilities(self.module)
def populate(self):
pass
def run(self, command, output='text'):
command_string = command
command = {
'command': command,
'output': output
}
resp = run_commands(self.module, [command], check_rc='retry_json')
try:
return resp[0]
except IndexError:
self.warnings.append('command %s failed, facts for this command will not be populated' % command_string)
return None
def get_config(self):
global g_config
if not g_config:
g_config = get_config(self.module)
return g_config
def transform_dict(self, data, keymap):
transform = dict()
for key, fact in keymap:
if key in data:
transform[fact] = data[key]
return transform
def transform_iterable(self, iterable, keymap):
for item in iterable:
yield self.transform_dict(item, keymap)
class Default(FactsBase):
def populate(self):
data = None
data = self.run('show version')
if data:
self.facts['serialnum'] = self.parse_serialnum(data)
data = self.run('show license host-id')
if data:
self.facts['license_hostid'] = self.parse_license_hostid(data)
self.facts.update(self.platform_facts())
def parse_serialnum(self, data):
match = re.search(r'Processor Board ID\s*(\S+)', data, re.M)
if match:
return match.group(1)
def platform_facts(self):
platform_facts = {}
resp = self.capabilities
device_info = resp['device_info']
platform_facts['system'] = device_info['network_os']
for item in ('model', 'image', 'version', 'platform', 'hostname'):
val = device_info.get('network_os_%s' % item)
if val:
platform_facts[item] = val
platform_facts['api'] = resp['network_api']
platform_facts['python_version'] = platform.python_version()
return platform_facts
def parse_license_hostid(self, data):
match = re.search(r'License hostid: VDH=(.+)$', data, re.M)
if match:
return match.group(1)
class Config(FactsBase):
def populate(self):
super(Config, self).populate()
self.facts['config'] = self.get_config()
class Features(FactsBase):
def populate(self):
super(Features, self).populate()
data = self.get_config()
if data:
features = []
for line in data.splitlines():
if line.startswith('feature'):
features.append(line.replace('feature', '').strip())
self.facts['features_enabled'] = features
class Hardware(FactsBase):
def populate(self):
data = self.run('dir')
if data:
self.facts['filesystems'] = self.parse_filesystems(data)
data = None
data = self.run('show system resources', output='json')
if data:
if isinstance(data, dict):
self.facts['memtotal_mb'] = int(data['memory_usage_total']) / 1024
self.facts['memfree_mb'] = int(data['memory_usage_free']) / 1024
else:
self.facts['memtotal_mb'] = self.parse_memtotal_mb(data)
self.facts['memfree_mb'] = self.parse_memfree_mb(data)
def parse_filesystems(self, data):
return re.findall(r'^Usage for (\S+)//', data, re.M)
def parse_memtotal_mb(self, data):
match = re.search(r'(\S+)K(\s+|)total', data, re.M)
if match:
memtotal = match.group(1)
return int(memtotal) / 1024
def parse_memfree_mb(self, data):
match = re.search(r'(\S+)K(\s+|)free', data, re.M)
if match:
memfree = match.group(1)
return int(memfree) / 1024
class Interfaces(FactsBase):
INTERFACE_MAP = frozenset([
('state', 'state'),
('desc', 'description'),
('eth_bw', 'bandwidth'),
('eth_duplex', 'duplex'),
('eth_speed', 'speed'),
('eth_mode', 'mode'),
('eth_hw_addr', 'macaddress'),
('eth_mtu', 'mtu'),
('eth_hw_desc', 'type')
])
INTERFACE_SVI_MAP = frozenset([
('svi_line_proto', 'state'),
('svi_bw', 'bandwidth'),
('svi_mac', 'macaddress'),
('svi_mtu', 'mtu'),
('type', 'type')
])
INTERFACE_IPV4_MAP = frozenset([
('eth_ip_addr', 'address'),
('eth_ip_mask', 'masklen')
])
INTERFACE_SVI_IPV4_MAP = frozenset([
('svi_ip_addr', 'address'),
('svi_ip_mask', 'masklen')
])
INTERFACE_IPV6_MAP = frozenset([
('addr', 'address'),
('prefix', 'subnet')
])
def ipv6_structure_op_supported(self):
data = self.capabilities
if data:
nxos_os_version = data['device_info']['network_os_version']
unsupported_versions = ['I2', 'F1', 'A8']
for ver in unsupported_versions:
if ver in nxos_os_version:
return False
return True
def populate(self):
self.facts['all_ipv4_addresses'] = list()
self.facts['all_ipv6_addresses'] = list()
self.facts['neighbors'] = {}
data = None
data = self.run('show interface', output='json')
if data:
if isinstance(data, dict):
self.facts['interfaces'] = self.populate_structured_interfaces(data)
else:
interfaces = self.parse_interfaces(data)
self.facts['interfaces'] = self.populate_interfaces(interfaces)
if self.ipv6_structure_op_supported():
data = self.run('show ipv6 interface', output='json')
else:
data = None
if data:
if isinstance(data, dict):
self.populate_structured_ipv6_interfaces(data)
else:
interfaces = self.parse_interfaces(data)
self.populate_ipv6_interfaces(interfaces)
data = self.run('show lldp neighbors', output='json')
if data:
if isinstance(data, dict):
self.facts['neighbors'].update(self.populate_structured_neighbors_lldp(data))
else:
self.facts['neighbors'].update(self.populate_neighbors(data))
data = self.run('show cdp neighbors detail', output='json')
if data:
if isinstance(data, dict):
self.facts['neighbors'].update(self.populate_structured_neighbors_cdp(data))
else:
self.facts['neighbors'].update(self.populate_neighbors_cdp(data))
self.facts['neighbors'].pop(None, None) # Remove null key
def populate_structured_interfaces(self, data):
interfaces = dict()
for item in data['TABLE_interface']['ROW_interface']:
name = item['interface']
intf = dict()
if 'type' in item:
intf.update(self.transform_dict(item, self.INTERFACE_SVI_MAP))
else:
intf.update(self.transform_dict(item, self.INTERFACE_MAP))
if 'eth_ip_addr' in item:
intf['ipv4'] = self.transform_dict(item, self.INTERFACE_IPV4_MAP)
self.facts['all_ipv4_addresses'].append(item['eth_ip_addr'])
if 'svi_ip_addr' in item:
intf['ipv4'] = self.transform_dict(item, self.INTERFACE_SVI_IPV4_MAP)
self.facts['all_ipv4_addresses'].append(item['svi_ip_addr'])
interfaces[name] = intf
return interfaces
def populate_structured_ipv6_interfaces(self, data):
try:
data = data['TABLE_intf']
if data:
if isinstance(data, dict):
data = [data]
for item in data:
name = item['ROW_intf']['intf-name']
intf = self.facts['interfaces'][name]
intf['ipv6'] = self.transform_dict(item, self.INTERFACE_IPV6_MAP)
try:
addr = item['ROW_intf']['addr']
except KeyError:
addr = item['ROW_intf']['TABLE_addr']['ROW_addr']['addr']
self.facts['all_ipv6_addresses'].append(addr)
else:
return ""
except TypeError:
return ""
def populate_structured_neighbors_lldp(self, data):
objects = dict()
data = data['TABLE_nbor']['ROW_nbor']
if isinstance(data, dict):
data = [data]
for item in data:
local_intf = normalize_interface(item['l_port_id'])
objects[local_intf] = list()
nbor = dict()
nbor['port'] = item['port_id']
nbor['host'] = nbor['sysname'] = item['chassis_id']
objects[local_intf].append(nbor)
return objects
def populate_structured_neighbors_cdp(self, data):
objects = dict()
data = data['TABLE_cdp_neighbor_detail_info']['ROW_cdp_neighbor_detail_info']
if isinstance(data, dict):
data = [data]
for item in data:
local_intf = item['intf_id']
objects[local_intf] = list()
nbor = dict()
nbor['port'] = item['port_id']
nbor['host'] = nbor['sysname'] = item['device_id']
objects[local_intf].append(nbor)
return objects
def parse_interfaces(self, data):
parsed = dict()
key = ''
for line in data.split('\n'):
if len(line) == 0:
continue
elif line.startswith('admin') or line[0] == ' ':
parsed[key] += '\n%s' % line
else:
match = re.match(r'^(\S+)', line)
if match:
key = match.group(1)
                    if not key.startswith('admin') and not key.startswith('IPv6 Interface'):
parsed[key] = line
return parsed
def populate_interfaces(self, interfaces):
facts = dict()
for key, value in iteritems(interfaces):
intf = dict()
if get_interface_type(key) == 'svi':
intf['state'] = self.parse_state(key, value, intf_type='svi')
intf['macaddress'] = self.parse_macaddress(value, intf_type='svi')
intf['mtu'] = self.parse_mtu(value, intf_type='svi')
intf['bandwidth'] = self.parse_bandwidth(value, intf_type='svi')
intf['type'] = self.parse_type(value, intf_type='svi')
if 'Internet Address' in value:
intf['ipv4'] = self.parse_ipv4_address(value, intf_type='svi')
facts[key] = intf
else:
intf['state'] = self.parse_state(key, value)
intf['description'] = self.parse_description(value)
intf['macaddress'] = self.parse_macaddress(value)
intf['mode'] = self.parse_mode(value)
intf['mtu'] = self.parse_mtu(value)
intf['bandwidth'] = self.parse_bandwidth(value)
intf['duplex'] = self.parse_duplex(value)
intf['speed'] = self.parse_speed(value)
intf['type'] = self.parse_type(value)
if 'Internet Address' in value:
intf['ipv4'] = self.parse_ipv4_address(value)
facts[key] = intf
return facts
def parse_state(self, key, value, intf_type='ethernet'):
match = None
if intf_type == 'svi':
match = re.search(r'line protocol is\s*(\S+)', value, re.M)
else:
match = re.search(r'%s is\s*(\S+)' % key, value, re.M)
if match:
return match.group(1)
def parse_macaddress(self, value, intf_type='ethernet'):
match = None
if intf_type == 'svi':
match = re.search(r'address is\s*(\S+)', value, re.M)
else:
match = re.search(r'address:\s*(\S+)', value, re.M)
if match:
return match.group(1)
def parse_mtu(self, value, intf_type='ethernet'):
match = re.search(r'MTU\s*(\S+)', value, re.M)
if match:
return match.group(1)
def parse_bandwidth(self, value, intf_type='ethernet'):
match = re.search(r'BW\s*(\S+)', value, re.M)
if match:
return match.group(1)
def parse_type(self, value, intf_type='ethernet'):
match = None
if intf_type == 'svi':
match = re.search(r'Hardware is\s*(\S+)', value, re.M)
else:
match = re.search(r'Hardware:\s*(.+),', value, re.M)
if match:
return match.group(1)
def parse_description(self, value, intf_type='ethernet'):
match = re.search(r'Description: (.+)$', value, re.M)
if match:
return match.group(1)
def parse_mode(self, value, intf_type='ethernet'):
match = re.search(r'Port mode is (\S+)', value, re.M)
if match:
return match.group(1)
def parse_duplex(self, value, intf_type='ethernet'):
match = re.search(r'(\S+)-duplex', value, re.M)
if match:
return match.group(1)
def parse_speed(self, value, intf_type='ethernet'):
match = re.search(r'duplex, (.+)$', value, re.M)
if match:
return match.group(1)
def parse_ipv4_address(self, value, intf_type='ethernet'):
ipv4 = {}
match = re.search(r'Internet Address is (.+)$', value, re.M)
if match:
address = match.group(1)
addr = address.split('/')[0]
ipv4['address'] = address.split('/')[0]
ipv4['masklen'] = address.split('/')[1]
self.facts['all_ipv4_addresses'].append(addr)
return ipv4
def populate_neighbors(self, data):
objects = dict()
# if there are no neighbors the show command returns
# ERROR: No neighbour information
if data.startswith('ERROR'):
return dict()
regex = re.compile(r'(\S+)\s+(\S+)\s+\d+\s+\w+\s+(\S+)')
for item in data.split('\n')[4:-1]:
match = regex.match(item)
if match:
nbor = dict()
nbor['host'] = nbor['sysname'] = match.group(1)
nbor['port'] = match.group(3)
local_intf = normalize_interface(match.group(2))
if local_intf not in objects:
objects[local_intf] = []
objects[local_intf].append(nbor)
return objects
def populate_neighbors_cdp(self, data):
facts = dict()
for item in data.split('----------------------------------------'):
if item == '':
continue
local_intf = self.parse_lldp_intf(item)
if local_intf not in facts:
facts[local_intf] = list()
fact = dict()
fact['port'] = self.parse_lldp_port(item)
fact['sysname'] = self.parse_lldp_sysname(item)
facts[local_intf].append(fact)
return facts
def parse_lldp_intf(self, data):
match = re.search(r'Interface:\s*(\S+)', data, re.M)
if match:
return match.group(1).strip(',')
def parse_lldp_port(self, data):
match = re.search(r'Port ID \(outgoing port\):\s*(\S+)', data, re.M)
if match:
return match.group(1)
def parse_lldp_sysname(self, data):
match = re.search(r'Device ID:(.+)$', data, re.M)
if match:
return match.group(1)
def populate_ipv6_interfaces(self, interfaces):
facts = dict()
for key, value in iteritems(interfaces):
intf = dict()
intf['ipv6'] = self.parse_ipv6_address(value)
facts[key] = intf
def parse_ipv6_address(self, value):
ipv6 = {}
match_addr = re.search(r'IPv6 address:\s*(\S+)', value, re.M)
if match_addr:
addr = match_addr.group(1)
ipv6['address'] = addr
self.facts['all_ipv6_addresses'].append(addr)
match_subnet = re.search(r'IPv6 subnet:\s*(\S+)', value, re.M)
if match_subnet:
ipv6['subnet'] = match_subnet.group(1)
return ipv6
class Legacy(FactsBase):
# facts from nxos_facts 2.1
VERSION_MAP = frozenset([
('host_name', '_hostname'),
('kickstart_ver_str', '_os'),
('chassis_id', '_platform')
])
MODULE_MAP = frozenset([
('model', 'model'),
('modtype', 'type'),
('ports', 'ports'),
('status', 'status')
])
FAN_MAP = frozenset([
('fanname', 'name'),
('fanmodel', 'model'),
('fanhwver', 'hw_ver'),
('fandir', 'direction'),
('fanstatus', 'status')
])
POWERSUP_MAP = frozenset([
('psmodel', 'model'),
('psnum', 'number'),
('ps_status', 'status'),
('ps_status_3k', 'status'),
('actual_out', 'actual_output'),
('actual_in', 'actual_in'),
('total_capa', 'total_capacity'),
('input_type', 'input_type'),
('watts', 'watts'),
('amps', 'amps')
])
    def populate(self):
        data = self.run('show version', output='json')
if data:
if isinstance(data, dict):
self.facts.update(self.transform_dict(data, self.VERSION_MAP))
else:
self.facts['hostname'] = self.parse_hostname(data)
self.facts['os'] = self.parse_os(data)
self.facts['platform'] = self.parse_platform(data)
data = self.run('show interface', output='json')
if data:
if isinstance(data, dict):
self.facts['interfaces_list'] = self.parse_structured_interfaces(data)
else:
self.facts['interfaces_list'] = self.parse_interfaces(data)
data = self.run('show vlan brief', output='json')
if data:
if isinstance(data, dict):
self.facts['vlan_list'] = self.parse_structured_vlans(data)
else:
self.facts['vlan_list'] = self.parse_vlans(data)
data = self.run('show module', output='json')
if data:
if isinstance(data, dict):
self.facts['module'] = self.parse_structured_module(data)
else:
self.facts['module'] = self.parse_module(data)
data = self.run('show environment fan', output='json')
if data:
if isinstance(data, dict):
self.facts['fan_info'] = self.parse_structured_fan_info(data)
else:
self.facts['fan_info'] = self.parse_fan_info(data)
data = self.run('show environment power', output='json')
if data:
if isinstance(data, dict):
self.facts['power_supply_info'] = self.parse_structured_power_supply_info(data)
else:
self.facts['power_supply_info'] = self.parse_power_supply_info(data)
def parse_structured_interfaces(self, data):
objects = list()
for item in data['TABLE_interface']['ROW_interface']:
objects.append(item['interface'])
return objects
def parse_structured_vlans(self, data):
objects = list()
data = data['TABLE_vlanbriefxbrief']['ROW_vlanbriefxbrief']
if isinstance(data, dict):
objects.append(data['vlanshowbr-vlanid-utf'])
elif isinstance(data, list):
for item in data:
objects.append(item['vlanshowbr-vlanid-utf'])
return objects
def parse_structured_module(self, data):
data = data['TABLE_modinfo']['ROW_modinfo']
if isinstance(data, dict):
data = [data]
objects = list(self.transform_iterable(data, self.MODULE_MAP))
return objects
def parse_structured_fan_info(self, data):
objects = list()
if data.get('fandetails'):
data = data['fandetails']['TABLE_faninfo']['ROW_faninfo']
elif data.get('fandetails_3k'):
data = data['fandetails_3k']['TABLE_faninfo']['ROW_faninfo']
else:
return objects
objects = list(self.transform_iterable(data, self.FAN_MAP))
return objects
def parse_structured_power_supply_info(self, data):
if data.get('powersup').get('TABLE_psinfo_n3k'):
fact = data['powersup']['TABLE_psinfo_n3k']['ROW_psinfo_n3k']
else:
if isinstance(data['powersup']['TABLE_psinfo'], list):
fact = []
for i in data['powersup']['TABLE_psinfo']:
fact.append(i['ROW_psinfo'])
else:
fact = data['powersup']['TABLE_psinfo']['ROW_psinfo']
objects = list(self.transform_iterable(fact, self.POWERSUP_MAP))
return objects
def parse_hostname(self, data):
match = re.search(r'\s+Device name:\s+(\S+)', data, re.M)
if match:
return match.group(1)
def parse_os(self, data):
match = re.search(r'\s+system:\s+version\s*(\S+)', data, re.M)
if match:
return match.group(1)
else:
match = re.search(r'\s+kickstart:\s+version\s*(\S+)', data, re.M)
if match:
return match.group(1)
def parse_platform(self, data):
match = re.search(r'Hardware\n\s+cisco\s+(\S+\s+\S+)', data, re.M)
if match:
return match.group(1)
def parse_interfaces(self, data):
objects = list()
for line in data.split('\n'):
if len(line) == 0:
continue
elif line.startswith('admin') or line[0] == ' ':
continue
else:
match = re.match(r'^(\S+)', line)
if match:
intf = match.group(1)
if get_interface_type(intf) != 'unknown':
objects.append(intf)
return objects
def parse_vlans(self, data):
objects = list()
for line in data.splitlines():
if line == '':
continue
if line[0].isdigit():
vlan = line.split()[0]
objects.append(vlan)
return objects
def parse_module(self, data):
objects = list()
for line in data.splitlines():
if line == '':
break
if line[0].isdigit():
obj = {}
match_port = re.search(r'\d\s*(\d*)', line, re.M)
if match_port:
obj['ports'] = match_port.group(1)
                match = re.search(r'\d\s*\d*\s*(.+)$', line, re.M)
                if match:
                    fields = match.group(1).split(' ')
                    items = list()
                    for item in fields:
                        if item == '':
                            continue
                        items.append(item.strip())
if items:
obj['type'] = items[0]
obj['model'] = items[1]
obj['status'] = items[2]
objects.append(obj)
return objects
    def parse_fan_info(self, data):
        objects = list()
        for line in data.splitlines():
            if '-----------------' in line or 'Status' in line:
                continue
            fields = line.split()
            if len(fields) > 1:
                obj = {}
                obj['name'] = fields[0]
                obj['model'] = fields[1]
                obj['hw_ver'] = fields[-2]
                obj['status'] = fields[-1]
                objects.append(obj)
        return objects
    def parse_power_supply_info(self, data):
        objects = list()
        for line in data.splitlines():
            if line == '':
                break
            if line[0].isdigit():
                fields = line.split()
                obj = {}
                obj['number'] = fields[0]
                obj['model'] = fields[1]
                obj['status'] = fields[-1]
                objects.append(obj)
        return objects
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,610 |
Role Variables leak to other Roles when using vars
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The documentation for roles (https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html) states that we can pass variables to roles using the following syntax:
```
---
- hosts: webservers
roles:
- common
- role: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
- role: foo_app_instance
vars:
dir: '/opt/b'
app_port: 5001
```
but if the role contains a variable that can be omitted (`|default(omit)`), and this variable was defined for another invocation of the role, the variable will take that other value even though it has not been defined for this invocation.
Edit:
This is also the case for any default value.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
core
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
AWX 9.1.0.0
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
with the following role:
```yaml
---
# tasks file for myRole
- name: print debug
debug:
msg: "{{ message | default(omit)}}"
```
and the following playbook:
```yaml
---
- name: POC
hosts: all
roles:
- role: myRole
vars:
message: "my message"
- role: myRole
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
TASK [myRole : print debug] ****************************************************
task path: /tmp/awx_57_w8e0389h/project/roles/myRole/tasks/main.yml:4
ok: [kimsufi.pandore2015.fr] => {
"msg": "my message"
}
TASK [myRole : print debug] ****************************************************
task path: /tmp/awx_59_mvl8pj9p/project/roles/myRole/tasks/main.yml:4
ok: [kimsufi.pandore2015.fr] => {
"msg": "Hello world!"
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [myRole : print debug] ****************************************************
task path: /tmp/awx_57_w8e0389h/project/roles/myRole/tasks/main.yml:4
ok: [kimsufi.pandore2015.fr] => {
"msg": "my message"
}
TASK [myRole : print debug] ****************************************************
task path: /tmp/awx_57_w8e0389h/project/roles/myRole/tasks/main.yml:4
ok: [kimsufi.pandore2015.fr] => {
"msg": "my message"
}
```
|
https://github.com/ansible/ansible/issues/66610
|
https://github.com/ansible/ansible/pull/66907
|
adf73d47ec7440ad7c56dd755b76c3c329079a89
|
c8568a5c9431b6d3770d555f865b2d88a294e3d8
| 2020-01-19T22:31:20Z |
python
| 2020-01-30T16:08:24Z |
docs/docsite/rst/user_guide/playbooks_reuse_roles.rst
|
.. _playbooks_reuse_roles:
*****
Roles
*****
Roles let you automatically load related vars_files, tasks, handlers, and other Ansible artifacts based on a known file structure. Once you group your content in roles, you can easily re-use them and share them with other users.
.. contents::
:local:
Role directory structure
========================
An Ansible role has a defined directory structure with seven main standard directories. You must include at least one of these directories in each role. You can omit any directories the role does not use. For example:
.. code-block:: text
# playbooks
site.yml
webservers.yml
fooservers.yml
roles/
common/
tasks/
handlers/
files/
templates/
vars/
defaults/
meta/
webservers/
tasks/
defaults/
meta/
Each directory within a role must contain a ``main.yml`` file with relevant content, except for ``files`` and ``templates``, which hold the deployed files directly:
- ``tasks/main.yml`` - the main list of tasks that the role executes.
- ``handlers/main.yml`` - handlers, which may be used within or outside this role.
- ``defaults/main.yml`` - default variables for the role (see :ref:`playbooks_variables` for more information). These variables have the lowest priority of any variables available, and can be easily overridden by any other variable, including inventory variables.
- ``vars/main.yml`` - other variables for the role (see :ref:`playbooks_variables` for more information).
- ``files/`` - files that the role deploys.
- ``templates/`` - templates that the role deploys.
- ``meta/main.yml`` - metadata for the role, including role dependencies.
You can add other YAML files in some directories. For example, you can place platform-specific tasks in separate files and refer to them in the ``tasks/main.yml`` file:
.. code-block:: yaml
# roles/example/tasks/main.yml
- name: install the correct web server for RHEL
import_tasks: redhat.yml
when: ansible_facts['os_family']|lower == 'redhat'
- name: install the correct web server for debian
import_tasks: debian.yml
when: ansible_facts['os_family']|lower == 'debian'
# roles/example/tasks/redhat.yml
- name: install the web server
  yum:
    name: "httpd"
    state: present
# roles/example/tasks/debian.yml
- name: install the web server
  apt:
    name: "apache2"
    state: present
Roles may also include modules and other plugin types in a directory called ``library``. For more information, please refer to :ref:`embedding_modules_and_plugins_in_roles` below.
.. _role_search_path:
Storing and finding roles
=========================
By default, Ansible looks for roles in two locations:
- in a directory called ``roles/``, relative to the playbook file
- in ``/etc/ansible/roles``
If you store your roles in a different location, set the :ref:`roles_path <DEFAULT_ROLES_PATH>` configuration option so Ansible can find your roles. Checking shared roles into a single location makes them easier to use in multiple playbooks. See :ref:`intro_configuration` for details about managing settings in ansible.cfg.
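For example, a minimal ``ansible.cfg`` entry pointing Ansible at two extra role locations could look like this (the paths shown are purely illustrative):
.. code-block:: ini
   # ansible.cfg (illustrative paths)
   [defaults]
   roles_path = /home/user/common/roles:/srv/ansible/roles
Entries in ``roles_path`` are colon-separated and searched in the order listed.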
Alternatively, you can call a role with a fully qualified path:
.. code-block:: yaml
---
- hosts: webservers
roles:
- role: '/path/to/my/roles/common'
Using roles
===========
You can use roles in three ways:
- at the play level with the ``roles`` option,
- at the tasks level with ``include_role``, or
- at the tasks level with ``import_role``
.. _roles_keyword:
Using roles at the play level
-----------------------------
The classic (original) way to use roles is with the ``roles`` option for a given play:
.. code-block:: yaml
---
- hosts: webservers
roles:
- common
- webservers
When you use the ``roles`` option at the play level, for each role 'x':
- If roles/x/tasks/main.yml exists, Ansible adds the tasks in that file to the play.
- If roles/x/handlers/main.yml exists, Ansible adds the handlers in that file to the play.
- If roles/x/vars/main.yml exists, Ansible adds the variables in that file to the play.
- If roles/x/defaults/main.yml exists, Ansible adds the variables in that file to the play.
- If roles/x/meta/main.yml exists, Ansible adds any role dependencies in that file to the list of roles.
- Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to specify relative or absolute paths for them.
When you use the ``roles`` option at the play level, Ansible treats the roles as static imports and processes them during playbook parsing. Ansible executes your playbook in this order:
- Any ``pre_tasks`` defined in the play.
- Any handlers triggered by pre_tasks.
- Each role listed in ``roles:``, in the order listed. Any role dependencies defined in the roles ``meta/main.yml`` run first, subject to tag filtering and conditionals. See :ref:`role_dependencies` for more details.
- Any ``tasks`` defined in the play.
- Any handlers triggered by the roles or tasks.
- Any ``post_tasks`` defined in the play.
- Any handlers triggered by post_tasks.
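The ordering above can be sketched with a minimal play (the host pattern, role name, and messages are illustrative):
.. code-block:: yaml
   ---
   - hosts: webservers
     pre_tasks:
       - debug:
           msg: "runs before any roles"
     roles:
       - common
     tasks:
       - debug:
           msg: "runs after all roles"
     post_tasks:
       - debug:
           msg: "runs last"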
.. note::
If using tags with tasks in a role, be sure to also tag your pre_tasks, post_tasks, and role dependencies and pass those along as well, especially if the pre/post tasks and role dependencies are used for monitoring outage window control or load balancing. See :ref:`tags` for details on adding and using tags.
You can pass other keywords to the ``roles`` option:
.. code-block:: yaml
---
- hosts: webservers
roles:
- common
- role: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
tags: typeA
- role: foo_app_instance
vars:
dir: '/opt/b'
app_port: 5001
tags: typeB
When you add a tag to the ``role`` option, Ansible applies the tag to ALL tasks within the role.
Including roles: dynamic re-use
-------------------------------
You can re-use roles dynamically anywhere in the ``tasks`` section of a play using ``include_role``. While roles added in a ``roles`` section run before any other tasks in a playbook, included roles run in the order they are defined. If there are other tasks before an ``include_role`` task, the other tasks will run first.
To include a role:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- debug:
msg: "this task runs before the example role"
- include_role:
name: example
- debug:
msg: "this task runs after the example role"
You can pass other keywords, including variables and tags, when including roles:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- include_role:
name: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
tags: typeA
...
When you add a :ref:`tag <tags>` to an ``include_role`` task, Ansible applies the tag `only` to the include itself. This means you can pass ``--tags`` to run only selected tasks from the role, if those tasks themselves have the same tag as the include statement. See :ref:`selective_reuse` for details.
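As an illustration of this distinction (the role, tag, and task names are invented for the example), a task inside an included role only runs under ``--tags typeA`` if it carries that tag itself:
.. code-block:: yaml
   # roles/foo_app_instance/tasks/main.yml (illustrative)
   - name: only runs when the play is invoked with --tags typeA
     debug:
       msg: "tag matched on both the include and the task"
     tags: typeA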
You can conditionally include a role:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- include_role:
name: some_role
when: "ansible_facts['os_family'] == 'RedHat'"
Importing roles: static re-use
------------------------------
You can re-use roles statically anywhere in the ``tasks`` section of a play using ``import_role``. The behavior is the same as using the ``roles`` keyword. For example:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- debug:
msg: "before we run our role"
- import_role:
name: example
- debug:
msg: "after we ran our role"
You can pass other keywords, including variables and tags, when importing roles:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- import_role:
name: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
...
When you add a tag to an ``import_role`` statement, Ansible applies the tag to `all` tasks within the role. See :ref:`tag_inheritance` for details.
Running a role multiple times in one playbook
=============================================
Ansible only executes each role once, even if you define it multiple times, unless the parameters defined on the role are different for each definition. For example, Ansible only runs the role ``foo`` once in a play like this:
.. code-block:: yaml
---
- hosts: webservers
roles:
- foo
- bar
- foo
You have two options to force Ansible to run a role more than once:
#. Pass different parameters in each role definition.
#. Add ``allow_duplicates: true`` to the ``meta/main.yml`` file for the role.
Example 1 - passing different parameters:
.. code-block:: yaml
---
- hosts: webservers
roles:
- role: foo
vars:
message: "first"
- { role: foo, vars: { message: "second" } }
In this example, because each role definition has different parameters, Ansible runs ``foo`` twice.
Example 2 - using ``allow_duplicates: true``:
.. code-block:: yaml
# playbook.yml
---
- hosts: webservers
roles:
- foo
- foo
# roles/foo/meta/main.yml
---
allow_duplicates: true
In this example, Ansible runs ``foo`` twice because we have explicitly enabled it to do so.
.. _role_dependencies:
Using role dependencies
=======================
Role dependencies let you automatically pull in other roles when using a role. Ansible does not execute role dependencies when you include or import a role. You must use the ``roles`` keyword if you want Ansible to execute role dependencies.
Role dependencies are stored in the ``meta/main.yml`` file within the role directory. This file should contain a list of roles and parameters to insert before the specified role. For example:
.. code-block:: yaml
# roles/myapp/meta/main.yml
---
dependencies:
- role: common
vars:
some_parameter: 3
- role: apache
vars:
apache_port: 80
- role: postgres
vars:
dbname: blarg
other_parameter: 12
Ansible always executes role dependencies before the role that includes them. Ansible executes recursive role dependencies as well. If one role depends on a second role, and the second role depends on a third role, Ansible executes the third role, then the second role, then the first role.
Running role dependencies multiple times
----------------------------------------
Ansible treats duplicate role dependencies like duplicate roles listed under ``roles:``: Ansible only executes role dependencies once, even if defined multiple times, unless the parameters defined on the role are different for each definition. If two roles in a playbook both list a third role as a dependency, Ansible only runs that role dependency once, unless you pass different parameters or use ``allow_duplicates: true`` in the dependent (third) role. See :ref:`Galaxy role dependencies <galaxy_dependencies>` for more details.
For example, a role named ``car`` depends on a role named ``wheel`` as follows:
.. code-block:: yaml
---
dependencies:
- role: wheel
vars:
n: 1
- role: wheel
vars:
n: 2
- role: wheel
vars:
n: 3
- role: wheel
vars:
n: 4
And the ``wheel`` role depends on two roles: ``tire`` and ``brake``. The ``meta/main.yml`` for wheel would then contain the following:
.. code-block:: yaml
---
dependencies:
- role: tire
- role: brake
And the ``meta/main.yml`` for ``tire`` and ``brake`` would contain the following:
.. code-block:: yaml
---
allow_duplicates: true
The resulting order of execution would be as follows:
.. code-block:: text
tire(n=1)
brake(n=1)
wheel(n=1)
tire(n=2)
brake(n=2)
wheel(n=2)
...
car
To use ``allow_duplicates: true`` with role dependencies, you must specify it for the dependent role, not for the parent role. In the example above, ``allow_duplicates: true`` appears in the ``meta/main.yml`` of the ``tire`` and ``brake`` roles. The ``wheel`` role does not require ``allow_duplicates: true``, because each instance defined by ``car`` uses different parameter values.
.. note::
See :ref:`playbooks_variables` for details on how Ansible chooses among variable values defined in different places (variable inheritance and scope).
.. _embedding_modules_and_plugins_in_roles:
Embedding modules and plugins in roles
======================================
If you write a custom module (see :ref:`developing_modules`) or a plugin (see :ref:`developing_plugins`), you might wish to distribute it as part of a role. For example, if you write a module that helps configure your company's internal software, and you want other people in your organization to use this module, but you do not want to tell everyone how to configure their Ansible library path, you can include the module in your internal_config role.
Alongside the 'tasks' and 'handlers' structure of a role, add a directory named 'library' and include the module directly inside it.
Assuming you had this:
.. code-block:: text
roles/
my_custom_modules/
library/
module1
module2
The module will be usable in the role itself, as well as any roles that are called *after* this role, as follows:
.. code-block:: yaml
---
- hosts: webservers
roles:
- my_custom_modules
- some_other_role_using_my_custom_modules
- yet_another_role_using_my_custom_modules
If necessary, you can also embed a module in a role to modify a module in Ansible's core distribution. For example, you can use the development version of a particular module before it is released in production releases by copying the module and embedding the copy in a role. Use this approach with caution, as API signatures may change in core components, and this workaround is not guaranteed to work.
The same mechanism can be used to embed and distribute plugins in a role, using the same schema. For example, for a filter plugin:
.. code-block:: text
roles/
my_custom_filter/
filter_plugins
filter1
filter2
These filters can then be used in a Jinja template in any role called after 'my_custom_filter'.
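As a hedged sketch (the file name and filter name are invented for this example; the ``FilterModule`` class exposing a ``filters()`` mapping is the convention Ansible expects for filter plugins):

```python
# roles/my_custom_filter/filter_plugins/filter1.py (illustrative path)

class FilterModule(object):
    """Ansible loads this class and calls filters() to discover filters."""

    def filters(self):
        # Map Jinja2 filter names to plain Python callables.
        return {
            'reverse_words': self.reverse_words,
        }

    @staticmethod
    def reverse_words(value):
        # 'one two three' -> 'three two one'
        return ' '.join(reversed(value.split()))
```

A template in a role called after 'my_custom_filter' could then use ``{{ 'one two three' | reverse_words }}``.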
Sharing roles: Ansible Galaxy
=============================
`Ansible Galaxy <https://galaxy.ansible.com>`_ is a free site for finding, downloading, rating, and reviewing all kinds of community-developed Ansible roles and can be a great way to get a jumpstart on your automation projects.
The client ``ansible-galaxy`` is included in Ansible. The Galaxy client allows you to download roles from Ansible Galaxy, and also provides an excellent default framework for creating your own roles.
Read the `Ansible Galaxy documentation <https://galaxy.ansible.com/docs/>`_ page for more information.
.. seealso::
:ref:`ansible_galaxy`
How to create new roles, share roles on Galaxy, role management
:ref:`yaml_syntax`
Learn about YAML syntax
:ref:`working_with_playbooks`
Review the basic Playbook language features
:ref:`playbooks_best_practices`
Tips for managing playbooks in the real world
:ref:`playbooks_variables`
Variables in playbooks
:ref:`playbooks_conditionals`
Conditionals in playbooks
:ref:`playbooks_loops`
Loops in playbooks
:ref:`tags`
Using tags to select or skip roles/tasks in long playbooks
:ref:`all_modules`
List of available modules
:ref:`developing_modules`
Extending Ansible by writing your own modules
`GitHub Ansible examples <https://github.com/ansible/ansible-examples>`_
Complete playbook files from the GitHub project source
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,974 |
New Percona/MySQL 8 privileges with underscore is not working.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
New Percona/MySQL 8 privileges containing an underscore are not working.
The following error appears:
`MSG: invalid privileges string: Invalid privileges specified: frozenset({'BACKUP_ADMIN'})`
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
mysql_user module
most likely the new privileges need to be added to lib/ansible/modules/database/mysql/mysql_user.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
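One possible direction (a hedged sketch of the idea only, not the module's actual code; all names below are hypothetical) is to accept any all-caps, underscore-joined token as a MySQL 8 "dynamic" privilege in addition to a fixed allow-list:

```python
import re

# Hypothetical sketch: besides a static allow-list, accept MySQL 8
# "dynamic" privileges such as BACKUP_ADMIN, which are uppercase words
# joined by underscores. The names here are illustrative.
STATIC_PRIVS = frozenset(('SELECT', 'INSERT', 'UPDATE', 'DELETE', 'ALL'))
DYNAMIC_PRIV_RE = re.compile(r'^[A-Z]+(_[A-Z]+)+$')

def is_valid_priv(priv):
    # Accept either a known static privilege or a dynamic-style token.
    return priv in STATIC_PRIVS or DYNAMIC_PRIV_RE.match(priv) is not None
```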
|
https://github.com/ansible/ansible/issues/66974
|
https://github.com/ansible/ansible/pull/66995
|
aad286b403746c6e44ab760b2879fd36aaaf3ebd
|
16ebeda86d63dc4e693e259a8ad96dc664cdbf1c
| 2020-01-31T08:37:23Z |
python
| 2020-01-31T19:44:03Z |
changelogs/fragments/66974-mysql_user_doesnt_support_privs_with_underscore.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,974 |
New Percona/MySQL 8 privileges with underscore is not working.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
New Percona/MySQL 8 privileges containing an underscore are not working.
The following error appears:
`MSG: invalid privileges string: Invalid privileges specified: frozenset({'BACKUP_ADMIN'})`
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
mysql_user module
most likely the new privileges need to be added to lib/ansible/modules/database/mysql/mysql_user.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/66974
|
https://github.com/ansible/ansible/pull/66995
|
aad286b403746c6e44ab760b2879fd36aaaf3ebd
|
16ebeda86d63dc4e693e259a8ad96dc664cdbf1c
| 2020-01-31T08:37:23Z |
python
| 2020-01-31T19:44:03Z |
lib/ansible/modules/database/mysql/mysql_user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Mark Theunissen <[email protected]>
# Sponsored by Four Kitchens http://fourkitchens.com.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: mysql_user
short_description: Adds or removes a user from a MySQL database
description:
- Adds or removes a user from a MySQL database.
version_added: "0.6"
options:
name:
description:
- Name of the user (role) to add or remove.
type: str
required: true
password:
description:
      - Set the user's password.
type: str
encrypted:
description:
- Indicate that the 'password' field is a `mysql_native_password` hash.
type: bool
default: no
version_added: "2.0"
host:
description:
- The 'host' part of the MySQL username.
type: str
default: localhost
host_all:
description:
- Override the host option, making ansible apply changes to all hostnames for a given user.
- This option cannot be used when creating users.
type: bool
default: no
version_added: "2.1"
priv:
description:
- "MySQL privileges string in the format: C(db.table:priv1,priv2)."
- "Multiple privileges can be specified by separating each one using
a forward slash: C(db.table:priv/db.table:priv)."
- The format is based on MySQL C(GRANT) statement.
- Database and table names can be quoted, MySQL-style.
- If column privileges are used, the C(priv1,priv2) part must be
exactly as returned by a C(SHOW GRANT) statement. If not followed,
the module will always report changes. It includes grouping columns
by permission (C(SELECT(col1,col2)) instead of C(SELECT(col1),SELECT(col2))).
- Can be passed as a dictionary (see the examples).
type: raw
append_privs:
description:
- Append the privileges defined by priv to the existing ones for this
user instead of overwriting existing ones.
type: bool
default: no
version_added: "1.4"
sql_log_bin:
description:
- Whether binary logging should be enabled or disabled for the connection.
type: bool
default: yes
version_added: "2.1"
state:
description:
- Whether the user should exist.
- When C(absent), removes the user.
type: str
choices: [ absent, present ]
default: present
check_implicit_admin:
description:
- Check if mysql allows login as root/nopassword before trying supplied credentials.
type: bool
default: no
version_added: "1.3"
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "2.0"
plugin:
description:
- User's plugin to authenticate (``CREATE USER user IDENTIFIED WITH plugin``).
type: str
version_added: '2.10'
plugin_hash_string:
description:
- User's plugin hash string (``CREATE USER user IDENTIFIED WITH plugin AS plugin_hash_string``).
type: str
version_added: '2.10'
plugin_auth_string:
description:
- User's plugin auth_string (``CREATE USER user IDENTIFIED WITH plugin BY plugin_auth_string``).
type: str
version_added: '2.10'
notes:
- "MySQL server installs with default login_user of 'root' and no password. To secure this user
as part of an idempotent playbook, you must create at least two tasks: the first must change the root user's password,
without providing any login_user/login_password details. The second must drop a ~/.my.cnf file containing
the new root credentials. Subsequent runs of the playbook will then succeed by reading the new credentials from
the file."
- Currently, there is only support for the `mysql_native_password` encrypted password hash module.
seealso:
- module: mysql_info
- name: MySQL access control and account management reference
description: Complete reference of the MySQL access control and account management documentation.
link: https://dev.mysql.com/doc/refman/8.0/en/access-control.html
author:
- Jonathan Mainguy (@Jmainguy)
- Benjamin Malynovytch (@bmalynovytch)
- Lukasz Tomaszkiewicz (@tomaszkiewicz)
extends_documentation_fragment: mysql
'''
EXAMPLES = r'''
- name: Removes anonymous user account for localhost
mysql_user:
name: ''
host: localhost
state: absent
- name: Removes all anonymous user accounts
mysql_user:
name: ''
host_all: yes
state: absent
- name: Create database user with name 'bob' and password '12345' with all database privileges
mysql_user:
name: bob
password: 12345
priv: '*.*:ALL'
state: present
- name: Create database user using hashed password with all database privileges
mysql_user:
name: bob
password: '*EE0D72C1085C46C5278932678FBE2C6A782821B4'
encrypted: yes
priv: '*.*:ALL'
state: present
- name: Create database user with password and all database privileges and 'WITH GRANT OPTION'
mysql_user:
name: bob
password: 12345
priv: '*.*:ALL,GRANT'
state: present
- name: Create user with password, all database privileges and 'WITH GRANT OPTION' in db1 and db2
mysql_user:
state: present
name: bob
password: 12345dd
priv:
'db1.*': 'ALL,GRANT'
'db2.*': 'ALL,GRANT'
# Note that REQUIRESSL is a special privilege that should only apply to *.* by itself.
- name: Modify user to require SSL connections.
mysql_user:
name: bob
append_privs: yes
priv: '*.*:REQUIRESSL'
state: present
- name: Ensure no user named 'sally'@'localhost' exists, also passing in the auth credentials.
mysql_user:
login_user: root
login_password: 123456
name: sally
state: absent
- name: Ensure no user named 'sally' exists at all
mysql_user:
name: sally
host_all: yes
state: absent
- name: Specify grants composed of more than one word
mysql_user:
name: replication
password: 12345
priv: "*.*:REPLICATION CLIENT"
state: present
- name: Revoke all privileges for user 'bob' and password '12345'
mysql_user:
name: bob
password: 12345
priv: "*.*:USAGE"
state: present
# Example privileges string format
# mydb.*:INSERT,UPDATE/anotherdb.*:SELECT/yetanotherdb.*:ALL
- name: Example using login_unix_socket to connect to server
mysql_user:
name: root
password: abc123
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: Example of skipping binary logging while adding user 'bob'
mysql_user:
name: bob
password: 12345
priv: "*.*:USAGE"
state: present
sql_log_bin: no
- name: Create user 'bob' authenticated with plugin 'AWSAuthenticationPlugin'
mysql_user:
name: bob
plugin: AWSAuthenticationPlugin
plugin_hash_string: RDS
priv: '*.*:ALL'
state: present
# Example .my.cnf file for setting the root password
# [client]
# user=root
# password=n<_665{vS43y
'''
import re
import string
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.database import SQLParseError
from ansible.module_utils.mysql import mysql_connect, mysql_driver, mysql_driver_fail_msg
from ansible.module_utils.six import iteritems
from ansible.module_utils._text import to_native
VALID_PRIVS = frozenset(('CREATE', 'DROP', 'GRANT', 'GRANT OPTION',
'LOCK TABLES', 'REFERENCES', 'EVENT', 'ALTER',
'DELETE', 'INDEX', 'INSERT', 'SELECT', 'UPDATE',
'CREATE TEMPORARY TABLES', 'TRIGGER', 'CREATE VIEW',
'SHOW VIEW', 'ALTER ROUTINE', 'CREATE ROUTINE',
'EXECUTE', 'FILE', 'CREATE TABLESPACE', 'CREATE USER',
'PROCESS', 'PROXY', 'RELOAD', 'REPLICATION CLIENT',
'REPLICATION SLAVE', 'SHOW DATABASES', 'SHUTDOWN',
'SUPER', 'ALL', 'ALL PRIVILEGES', 'USAGE', 'REQUIRESSL',
'CREATE ROLE', 'DROP ROLE', 'APPLICATION PASSWORD ADMIN',
'AUDIT ADMIN', 'BACKUP ADMIN', 'BINLOG ADMIN',
'BINLOG ENCRYPTION ADMIN', 'CONNECTION ADMIN',
'ENCRYPTION KEY ADMIN', 'FIREWALL ADMIN', 'FIREWALL USER',
'GROUP REPLICATION ADMIN', 'PERSIST RO VARIABLES ADMIN',
'REPLICATION SLAVE ADMIN', 'RESOURCE GROUP ADMIN',
'RESOURCE GROUP USER', 'ROLE ADMIN', 'SET USER ID',
'SESSION VARIABLES ADMIN', 'SYSTEM VARIABLES ADMIN',
'VERSION TOKEN ADMIN', 'XA RECOVER ADMIN',
'LOAD FROM S3', 'SELECT INTO S3'))
class InvalidPrivsError(Exception):
pass
# ===========================================
# MySQL module specific support methods.
#
# User Authentication Management changed in MySQL 5.7 and MariaDB 10.2.0
def use_old_user_mgmt(cursor):
cursor.execute("SELECT VERSION()")
result = cursor.fetchone()
version_str = result[0]
version = version_str.split('.')
if 'mariadb' in version_str.lower():
# Prior to MariaDB 10.2
if int(version[0]) * 1000 + int(version[1]) < 10002:
return True
else:
return False
else:
# Prior to MySQL 5.7
if int(version[0]) * 1000 + int(version[1]) < 5007:
return True
else:
return False
def get_mode(cursor):
cursor.execute('SELECT @@GLOBAL.sql_mode')
result = cursor.fetchone()
mode_str = result[0]
if 'ANSI' in mode_str:
mode = 'ANSI'
else:
mode = 'NOTANSI'
return mode
def user_exists(cursor, user, host, host_all):
if host_all:
cursor.execute("SELECT count(*) FROM mysql.user WHERE user = %s", ([user]))
else:
cursor.execute("SELECT count(*) FROM mysql.user WHERE user = %s AND host = %s", (user, host))
count = cursor.fetchone()
return count[0] > 0
def user_add(cursor, user, host, host_all, password, encrypted,
plugin, plugin_hash_string, plugin_auth_string, new_priv, check_mode):
# we cannot create users without a proper hostname
if host_all:
return False
if check_mode:
return True
if password and encrypted:
cursor.execute("CREATE USER %s@%s IDENTIFIED BY PASSWORD %s", (user, host, password))
elif password and not encrypted:
cursor.execute("CREATE USER %s@%s IDENTIFIED BY %s", (user, host, password))
elif plugin and plugin_hash_string:
cursor.execute("CREATE USER %s@%s IDENTIFIED WITH %s AS %s", (user, host, plugin, plugin_hash_string))
elif plugin and plugin_auth_string:
cursor.execute("CREATE USER %s@%s IDENTIFIED WITH %s BY %s", (user, host, plugin, plugin_auth_string))
elif plugin:
cursor.execute("CREATE USER %s@%s IDENTIFIED WITH %s", (user, host, plugin))
else:
cursor.execute("CREATE USER %s@%s", (user, host))
if new_priv is not None:
for db_table, priv in iteritems(new_priv):
privileges_grant(cursor, user, host, db_table, priv)
return True
def is_hash(password):
ishash = False
if len(password) == 41 and password[0] == '*':
if frozenset(password[1:]).issubset(string.hexdigits):
ishash = True
return ishash
def user_mod(cursor, user, host, host_all, password, encrypted,
plugin, plugin_hash_string, plugin_auth_string, new_priv, append_privs, module):
changed = False
msg = "User unchanged"
grant_option = False
if host_all:
hostnames = user_get_hostnames(cursor, [user])
else:
hostnames = [host]
for host in hostnames:
# Handle clear text and hashed passwords.
if bool(password):
# Determine what user management method server uses
old_user_mgmt = use_old_user_mgmt(cursor)
# Get a list of valid columns in mysql.user table to check if Password and/or authentication_string exist
cursor.execute("""
SELECT COLUMN_NAME FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME = 'user' AND COLUMN_NAME IN ('Password', 'authentication_string')
ORDER BY COLUMN_NAME DESC LIMIT 1
""")
colA = cursor.fetchone()
cursor.execute("""
SELECT COLUMN_NAME FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME = 'user' AND COLUMN_NAME IN ('Password', 'authentication_string')
ORDER BY COLUMN_NAME ASC LIMIT 1
""")
colB = cursor.fetchone()
# Select hash from either Password or authentication_string, depending which one exists and/or is filled
cursor.execute("""
SELECT COALESCE(
CASE WHEN %s = '' THEN NULL ELSE %s END,
CASE WHEN %s = '' THEN NULL ELSE %s END
)
FROM mysql.user WHERE user = %%s AND host = %%s
""" % (colA[0], colA[0], colB[0], colB[0]), (user, host))
current_pass_hash = cursor.fetchone()[0]
if isinstance(current_pass_hash, bytes):
current_pass_hash = current_pass_hash.decode('ascii')
if encrypted:
encrypted_password = password
if not is_hash(encrypted_password):
module.fail_json(msg="encrypted was specified however it does not appear to be a valid hash expecting: *SHA1(SHA1(your_password))")
else:
if old_user_mgmt:
cursor.execute("SELECT PASSWORD(%s)", (password,))
else:
cursor.execute("SELECT CONCAT('*', UCASE(SHA1(UNHEX(SHA1(%s)))))", (password,))
encrypted_password = cursor.fetchone()[0]
if current_pass_hash != encrypted_password:
msg = "Password updated"
if module.check_mode:
return (True, msg)
if old_user_mgmt:
cursor.execute("SET PASSWORD FOR %s@%s = %s", (user, host, encrypted_password))
msg = "Password updated (old style)"
else:
try:
cursor.execute("ALTER USER %s@%s IDENTIFIED WITH mysql_native_password AS %s", (user, host, encrypted_password))
msg = "Password updated (new style)"
except (mysql_driver.Error) as e:
# https://stackoverflow.com/questions/51600000/authentication-string-of-root-user-on-mysql
# Replacing empty root password with new authentication mechanisms fails with error 1396
if e.args[0] == 1396:
cursor.execute(
"UPDATE user SET plugin = %s, authentication_string = %s, Password = '' WHERE User = %s AND Host = %s",
('mysql_native_password', encrypted_password, user, host)
)
cursor.execute("FLUSH PRIVILEGES")
msg = "Password forced update"
else:
raise e
changed = True
# Handle plugin authentication
if plugin:
cursor.execute("SELECT plugin, authentication_string FROM mysql.user "
"WHERE user = %s AND host = %s", (user, host))
current_plugin = cursor.fetchone()
update = False
if current_plugin[0] != plugin:
update = True
if plugin_hash_string and current_plugin[1] != plugin_hash_string:
update = True
if plugin_auth_string and current_plugin[1] != plugin_auth_string:
# this case can cause more updates than expected,
# as plugin can hash auth_string in any way it wants
# and there's no way to figure it out for
# a check, so I prefer to update more often than never
update = True
if update:
if plugin_hash_string:
cursor.execute("ALTER USER %s@%s IDENTIFIED WITH %s AS %s", (user, host, plugin, plugin_hash_string))
elif plugin_auth_string:
cursor.execute("ALTER USER %s@%s IDENTIFIED WITH %s BY %s", (user, host, plugin, plugin_auth_string))
else:
cursor.execute("ALTER USER %s@%s IDENTIFIED WITH %s", (user, host, plugin))
changed = True
# Handle privileges
if new_priv is not None:
curr_priv = privileges_get(cursor, user, host)
# If the user has privileges on a db.table that doesn't appear at all in
# the new specification, then revoke all privileges on it.
for db_table, priv in iteritems(curr_priv):
# If the user has the GRANT OPTION on a db.table, revoke it first.
if "GRANT" in priv:
grant_option = True
if db_table not in new_priv:
if user != "root" and "PROXY" not in priv and not append_privs:
msg = "Privileges updated"
if module.check_mode:
return (True, msg)
privileges_revoke(cursor, user, host, db_table, priv, grant_option)
changed = True
# If the user doesn't currently have any privileges on a db.table, then
# we can perform a straight grant operation.
for db_table, priv in iteritems(new_priv):
if db_table not in curr_priv:
msg = "New privileges granted"
if module.check_mode:
return (True, msg)
privileges_grant(cursor, user, host, db_table, priv)
changed = True
# If the db.table specification exists in both the user's current privileges
# and in the new privileges, then we need to see if there's a difference.
db_table_intersect = set(new_priv.keys()) & set(curr_priv.keys())
for db_table in db_table_intersect:
priv_diff = set(new_priv[db_table]) ^ set(curr_priv[db_table])
if len(priv_diff) > 0:
msg = "Privileges updated"
if module.check_mode:
return (True, msg)
if not append_privs:
privileges_revoke(cursor, user, host, db_table, curr_priv[db_table], grant_option)
privileges_grant(cursor, user, host, db_table, new_priv[db_table])
changed = True
return (changed, msg)
def user_delete(cursor, user, host, host_all, check_mode):
if check_mode:
return True
if host_all:
hostnames = user_get_hostnames(cursor, [user])
for hostname in hostnames:
cursor.execute("DROP USER %s@%s", (user, hostname))
else:
cursor.execute("DROP USER %s@%s", (user, host))
return True
def user_get_hostnames(cursor, user):
cursor.execute("SELECT Host FROM mysql.user WHERE user = %s", user)
hostnames_raw = cursor.fetchall()
hostnames = []
for hostname_raw in hostnames_raw:
hostnames.append(hostname_raw[0])
return hostnames
def privileges_get(cursor, user, host):
""" MySQL doesn't have a better method of getting privileges aside from the
SHOW GRANTS query syntax, which requires us to then parse the returned string.
Here's an example of the string that is returned from MySQL:
GRANT USAGE ON *.* TO 'user'@'localhost' IDENTIFIED BY 'pass';
This function makes the query and returns a dictionary containing the results.
The dictionary format is the same as that returned by privileges_unpack() below.
"""
output = {}
cursor.execute("SHOW GRANTS FOR %s@%s", (user, host))
grants = cursor.fetchall()
def pick(x):
if x == 'ALL PRIVILEGES':
return 'ALL'
else:
return x
for grant in grants:
res = re.match("""GRANT (.+) ON (.+) TO (['`"]).*\\3@(['`"]).*\\4( IDENTIFIED BY PASSWORD (['`"]).+\\6)? ?(.*)""", grant[0])
if res is None:
raise InvalidPrivsError('unable to parse the MySQL grant string: %s' % grant[0])
privileges = res.group(1).split(", ")
privileges = [pick(x) for x in privileges]
if "WITH GRANT OPTION" in res.group(7):
privileges.append('GRANT')
if "REQUIRE SSL" in res.group(7):
privileges.append('REQUIRESSL')
db = res.group(2)
output[db] = privileges
return output
def privileges_unpack(priv, mode):
""" Take a privileges string, typically passed as a parameter, and unserialize
it into a dictionary, the same format as privileges_get() above. We have this
custom format to avoid using YAML/JSON strings inside YAML playbooks. Example
of a privileges string:
mydb.*:INSERT,UPDATE/anotherdb.*:SELECT/yetanother.*:ALL
The privilege USAGE stands for no privileges, so we add that in on *.* if it's
not specified in the string, as MySQL will always provide this by default.
"""
if mode == 'ANSI':
quote = '"'
else:
quote = '`'
output = {}
privs = []
for item in priv.strip().split('/'):
pieces = item.strip().rsplit(':', 1)
dbpriv = pieces[0].rsplit(".", 1)
# Check for FUNCTION or PROCEDURE object types
parts = dbpriv[0].split(" ", 1)
object_type = ''
if len(parts) > 1 and (parts[0] == 'FUNCTION' or parts[0] == 'PROCEDURE'):
object_type = parts[0] + ' '
dbpriv[0] = parts[1]
# Do not escape if privilege is for database or table, i.e.
# neither quote *. nor .*
for i, side in enumerate(dbpriv):
if side.strip('`') != '*':
dbpriv[i] = '%s%s%s' % (quote, side.strip('`'), quote)
pieces[0] = object_type + '.'.join(dbpriv)
if '(' in pieces[1]:
output[pieces[0]] = re.split(r',\s*(?=[^)]*(?:\(|$))', pieces[1].upper())
for i in output[pieces[0]]:
privs.append(re.sub(r'\s*\(.*\)', '', i))
else:
output[pieces[0]] = pieces[1].upper().split(',')
privs = output[pieces[0]]
new_privs = frozenset(privs)
if not new_privs.issubset(VALID_PRIVS):
raise InvalidPrivsError('Invalid privileges specified: %s' % new_privs.difference(VALID_PRIVS))
if '*.*' not in output:
output['*.*'] = ['USAGE']
# if we are only specifying something like REQUIRESSL and/or GRANT (=WITH GRANT OPTION) in *.*
# we still need to add USAGE as a privilege to avoid syntax errors
if 'REQUIRESSL' in priv and not set(output['*.*']).difference(set(['GRANT', 'REQUIRESSL'])):
output['*.*'].append('USAGE')
return output
def privileges_revoke(cursor, user, host, db_table, priv, grant_option):
# Escape '%' since mysql db.execute() uses a format string
db_table = db_table.replace('%', '%%')
if grant_option:
query = ["REVOKE GRANT OPTION ON %s" % db_table]
query.append("FROM %s@%s")
query = ' '.join(query)
cursor.execute(query, (user, host))
priv_string = ",".join([p for p in priv if p not in ('GRANT', 'REQUIRESSL')])
query = ["REVOKE %s ON %s" % (priv_string, db_table)]
query.append("FROM %s@%s")
query = ' '.join(query)
cursor.execute(query, (user, host))
def privileges_grant(cursor, user, host, db_table, priv):
# Escape '%' since mysql db.execute uses a format string and the
# specification of db and table often use a % (SQL wildcard)
db_table = db_table.replace('%', '%%')
priv_string = ",".join([p for p in priv if p not in ('GRANT', 'REQUIRESSL')])
query = ["GRANT %s ON %s" % (priv_string, db_table)]
query.append("TO %s@%s")
if 'REQUIRESSL' in priv:
query.append("REQUIRE SSL")
if 'GRANT' in priv:
query.append("WITH GRANT OPTION")
query = ' '.join(query)
cursor.execute(query, (user, host))
def convert_priv_dict_to_str(priv):
"""Converts privs dictionary to string of certain format.
Args:
priv (dict): Dict of privileges that needs to be converted to string.
Returns:
priv (str): String representation of input argument.
"""
priv_list = ['%s:%s' % (key, val) for key, val in iteritems(priv)]
return '/'.join(priv_list)
# ===========================================
# Module execution.
#
def main():
module = AnsibleModule(
argument_spec=dict(
login_user=dict(type='str'),
login_password=dict(type='str', no_log=True),
login_host=dict(type='str', default='localhost'),
login_port=dict(type='int', default=3306),
login_unix_socket=dict(type='str'),
user=dict(type='str', required=True, aliases=['name']),
password=dict(type='str', no_log=True),
encrypted=dict(type='bool', default=False),
host=dict(type='str', default='localhost'),
host_all=dict(type="bool", default=False),
state=dict(type='str', default='present', choices=['absent', 'present']),
priv=dict(type='raw'),
append_privs=dict(type='bool', default=False),
check_implicit_admin=dict(type='bool', default=False),
update_password=dict(type='str', default='always', choices=['always', 'on_create']),
connect_timeout=dict(type='int', default=30),
config_file=dict(type='path', default='~/.my.cnf'),
sql_log_bin=dict(type='bool', default=True),
client_cert=dict(type='path', aliases=['ssl_cert']),
client_key=dict(type='path', aliases=['ssl_key']),
ca_cert=dict(type='path', aliases=['ssl_ca']),
plugin=dict(default=None, type='str'),
plugin_hash_string=dict(default=None, type='str'),
plugin_auth_string=dict(default=None, type='str'),
),
supports_check_mode=True,
)
login_user = module.params["login_user"]
login_password = module.params["login_password"]
user = module.params["user"]
password = module.params["password"]
encrypted = module.boolean(module.params["encrypted"])
host = module.params["host"].lower()
host_all = module.params["host_all"]
state = module.params["state"]
priv = module.params["priv"]
check_implicit_admin = module.params['check_implicit_admin']
connect_timeout = module.params['connect_timeout']
config_file = module.params['config_file']
append_privs = module.boolean(module.params["append_privs"])
update_password = module.params['update_password']
ssl_cert = module.params["client_cert"]
ssl_key = module.params["client_key"]
ssl_ca = module.params["ca_cert"]
db = ''
sql_log_bin = module.params["sql_log_bin"]
plugin = module.params["plugin"]
plugin_hash_string = module.params["plugin_hash_string"]
plugin_auth_string = module.params["plugin_auth_string"]
if priv and not (isinstance(priv, str) or isinstance(priv, dict)):
module.fail_json(msg="priv parameter must be str or dict but %s was passed" % type(priv))
if priv and isinstance(priv, dict):
priv = convert_priv_dict_to_str(priv)
if mysql_driver is None:
module.fail_json(msg=mysql_driver_fail_msg)
cursor = None
try:
if check_implicit_admin:
try:
cursor, db_conn = mysql_connect(module, 'root', '', config_file, ssl_cert, ssl_key, ssl_ca, db,
connect_timeout=connect_timeout)
except Exception:
pass
if not cursor:
cursor, db_conn = mysql_connect(module, login_user, login_password, config_file, ssl_cert, ssl_key, ssl_ca, db,
connect_timeout=connect_timeout)
except Exception as e:
module.fail_json(msg="unable to connect to database, check login_user and login_password are correct or %s has the credentials. "
"Exception message: %s" % (config_file, to_native(e)))
if not sql_log_bin:
cursor.execute("SET SQL_LOG_BIN=0;")
if priv is not None:
try:
mode = get_mode(cursor)
except Exception as e:
module.fail_json(msg=to_native(e))
try:
priv = privileges_unpack(priv, mode)
except Exception as e:
module.fail_json(msg="invalid privileges string: %s" % to_native(e))
if state == "present":
if user_exists(cursor, user, host, host_all):
try:
if update_password == 'always':
changed, msg = user_mod(cursor, user, host, host_all, password, encrypted,
plugin, plugin_hash_string, plugin_auth_string,
priv, append_privs, module)
else:
changed, msg = user_mod(cursor, user, host, host_all, None, encrypted,
plugin, plugin_hash_string, plugin_auth_string,
priv, append_privs, module)
except (SQLParseError, InvalidPrivsError, mysql_driver.Error) as e:
module.fail_json(msg=to_native(e))
else:
if host_all:
module.fail_json(msg="host_all parameter cannot be used when adding a user")
try:
changed = user_add(cursor, user, host, host_all, password, encrypted,
plugin, plugin_hash_string, plugin_auth_string,
priv, module.check_mode)
if changed:
msg = "User added"
except (SQLParseError, InvalidPrivsError, mysql_driver.Error) as e:
module.fail_json(msg=to_native(e))
elif state == "absent":
if user_exists(cursor, user, host, host_all):
changed = user_delete(cursor, user, host, host_all, module.check_mode)
msg = "User deleted"
else:
changed = False
msg = "User doesn't exist"
module.exit_json(changed=changed, user=user, msg=msg)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,077 |
Package (or pacman) breaks: "IndexError: list index out of range" on ansible devel
|
##### SUMMARY
When using Ansible devel (`pip install git+https://github.com/ansible/ansible.git@devel`) to install software using `package` (likely passed to the `pacman` module) and looping over a list of packages, an error is displayed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
package
pacman
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
```
##### CONFIGURATION
```paste below
(empty)
```
##### OS / ENVIRONMENT
controller: Ubuntu (on Travis)
managed node: Arch Linux in a container
##### STEPS TO REPRODUCE
Have a look at [the build](https://travis-ci.org/robertdebock/ansible-role-at/jobs/592714211#L834) and [the code](- name: install bootstrap_facts_packages)
##### EXPECTED RESULTS
No errors.
##### ACTUAL RESULTS
```paste below
835 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: IndexError: list index out of range
836 failed: [at-archlinux] (item=python3) => changed=false
837 ansible_loop_var: item
838 item: python3
839 module_stderr: |-
840 Traceback (most recent call last):
841 File "/root/.ansible/tmp/ansible-tmp-1570043024.2295542-61858662337132/AnsiballZ_pacman.py", line 102, in <module>
842 _ansiballz_main()
843 File "/root/.ansible/tmp/ansible-tmp-1570043024.2295542-61858662337132/AnsiballZ_pacman.py", line 94, in _ansiballz_main
844 invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
845 File "/root/.ansible/tmp/ansible-tmp-1570043024.2295542-61858662337132/AnsiballZ_pacman.py", line 40, in invoke_module
846 runpy.run_module(mod_name='ansible.modules.packaging.os.pacman', init_globals=None, run_name='__main__', alter_sys=False)
847 File "/usr/lib/python3.7/runpy.py", line 208, in run_module
848 return _run_code(code, {}, init_globals, run_name, mod_spec)
849 File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
850 exec(code, run_globals)
851 File "/tmp/ansible_pacman_payload_0g8c68rr/ansible_pacman_payload.zip/ansible/modules/packaging/os/pacman.py", line 500, in <module>
852 File "/tmp/ansible_pacman_payload_0g8c68rr/ansible_pacman_payload.zip/ansible/modules/packaging/os/pacman.py", line 492, in main
853 File "/tmp/ansible_pacman_payload_0g8c68rr/ansible_pacman_payload.zip/ansible/modules/packaging/os/pacman.py", line 341, in install_packages
854 IndexError: list index out of range
855 module_stdout: ''
856 msg: |-
857 MODULE FAILURE
858 See stdout/stderr for the exact error
859 rc: 1
```
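The traceback ends in `install_packages()` indexing into a list, which suggests a line of pacman output was split into fewer fields than the code assumed. A hedged sketch of defensive parsing (the function name and exact output format are assumptions for illustration, not the module's actual code):

```python
# Hypothetical sketch: guard list indexing when parsing pacman output.
# The IndexError in install_packages() is consistent with splitting an
# output line and indexing field [1] without checking the field count.
def parse_name_version(lines):
    """Split 'name version' lines, skipping any line lacking both fields."""
    packages = []
    for line in lines:
        fields = line.split()
        if len(fields) < 2:  # avoid IndexError on empty/unexpected lines
            continue
        packages.append((fields[0], fields[1]))
    return packages
```

Checking `len(fields)` before indexing turns malformed or empty output lines into skipped entries instead of a module crash.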
|
https://github.com/ansible/ansible/issues/63077
|
https://github.com/ansible/ansible/pull/65750
|
3baea92ec94c55b04d2986096ddf49440d60eca3
|
14b1febf64f03d5d1b7b02acb7749055accd12fc
| 2019-10-03T04:34:18Z |
python
| 2020-02-01T13:37:27Z |
changelogs/fragments/65750-pacman.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,077 |
Package (or pacman) breaks: "IndexError: list index out of range" on ansible devel
|
##### SUMMARY
When using Ansible devel (`pip install git+https://github.com/ansible/ansible.git@devel`) to install software using `package` (likely passed to the `pacman` module) and looping over a list of packages, an error is displayed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
package
pacman
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
```
##### CONFIGURATION
```paste below
(empty)
```
##### OS / ENVIRONMENT
controller: Ubuntu (on Travis)
managed node: Arch Linux in a container
##### STEPS TO REPRODUCE
Have a look at [the build](https://travis-ci.org/robertdebock/ansible-role-at/jobs/592714211#L834) and [the code](- name: install bootstrap_facts_packages)
##### EXPECTED RESULTS
No errors.
##### ACTUAL RESULTS
```paste below
835 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: IndexError: list index out of range
836 failed: [at-archlinux] (item=python3) => changed=false
837 ansible_loop_var: item
838 item: python3
839 module_stderr: |-
840 Traceback (most recent call last):
841 File "/root/.ansible/tmp/ansible-tmp-1570043024.2295542-61858662337132/AnsiballZ_pacman.py", line 102, in <module>
842 _ansiballz_main()
843 File "/root/.ansible/tmp/ansible-tmp-1570043024.2295542-61858662337132/AnsiballZ_pacman.py", line 94, in _ansiballz_main
844 invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
845 File "/root/.ansible/tmp/ansible-tmp-1570043024.2295542-61858662337132/AnsiballZ_pacman.py", line 40, in invoke_module
846 runpy.run_module(mod_name='ansible.modules.packaging.os.pacman', init_globals=None, run_name='__main__', alter_sys=False)
847 File "/usr/lib/python3.7/runpy.py", line 208, in run_module
848 return _run_code(code, {}, init_globals, run_name, mod_spec)
849 File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
850 exec(code, run_globals)
851 File "/tmp/ansible_pacman_payload_0g8c68rr/ansible_pacman_payload.zip/ansible/modules/packaging/os/pacman.py", line 500, in <module>
852 File "/tmp/ansible_pacman_payload_0g8c68rr/ansible_pacman_payload.zip/ansible/modules/packaging/os/pacman.py", line 492, in main
853 File "/tmp/ansible_pacman_payload_0g8c68rr/ansible_pacman_payload.zip/ansible/modules/packaging/os/pacman.py", line 341, in install_packages
854 IndexError: list index out of range
855 module_stdout: ''
856 msg: |-
857 MODULE FAILURE
858 See stdout/stderr for the exact error
859 rc: 1
```
|
https://github.com/ansible/ansible/issues/63077
|
https://github.com/ansible/ansible/pull/65750
|
3baea92ec94c55b04d2986096ddf49440d60eca3
|
14b1febf64f03d5d1b7b02acb7749055accd12fc
| 2019-10-03T04:34:18Z |
python
| 2020-02-01T13:37:27Z |
lib/ansible/modules/packaging/os/pacman.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Afterburn <https://github.com/afterburn>
# Copyright: (c) 2013, Aaron Bull Schaefer <[email protected]>
# Copyright: (c) 2015, Indrajit Raychaudhuri <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: pacman
short_description: Manage packages with I(pacman)
description:
- Manage packages with the I(pacman) package manager, which is used by Arch Linux and its variants.
version_added: "1.0"
author:
- Indrajit Raychaudhuri (@indrajitr)
- Aaron Bull Schaefer (@elasticdog) <[email protected]>
- Maxime de Roucy (@tchernomax)
options:
name:
description:
- Name or list of names of the package(s) or file(s) to install, upgrade, or remove.
Can't be used in combination with C(upgrade).
aliases: [ package, pkg ]
type: list
elements: str
state:
description:
- Desired state of the package.
default: present
choices: [ absent, latest, present ]
force:
description:
      - When removing packages, forcibly remove them without any checks.
        Same as `extra_args="--nodeps --nodeps"`.
        When combined with C(update_cache), force a redownload of the repo databases.
        Same as `update_cache_extra_args="--refresh --refresh"`.
default: no
type: bool
version_added: "2.0"
extra_args:
description:
- Additional option to pass to pacman when enforcing C(state).
default:
version_added: "2.8"
update_cache:
description:
- Whether or not to refresh the master package lists.
- This can be run as part of a package installation or as a separate step.
default: no
type: bool
aliases: [ update-cache ]
update_cache_extra_args:
description:
- Additional option to pass to pacman when enforcing C(update_cache).
default:
version_added: "2.8"
upgrade:
description:
- Whether or not to upgrade the whole system.
Can't be used in combination with C(name).
default: no
type: bool
version_added: "2.0"
upgrade_extra_args:
description:
- Additional option to pass to pacman when enforcing C(upgrade).
default:
version_added: "2.8"
notes:
- When used with a `loop:`, each package will be processed individually;
it is much more efficient to pass the list directly to the `name` option.
'''
RETURN = '''
packages:
description: a list of packages that have been changed
returned: when upgrade is set to yes
type: list
sample: [ package, other-package ]
'''
EXAMPLES = '''
- name: Install package foo from repo
pacman:
name: foo
state: present
- name: Install package bar from file
pacman:
name: ~/bar-1.0-1-any.pkg.tar.xz
state: present
- name: Install package foo from repo and bar from file
pacman:
name:
- foo
- ~/bar-1.0-1-any.pkg.tar.xz
state: present
- name: Upgrade package foo
pacman:
name: foo
state: latest
update_cache: yes
- name: Remove packages foo and bar
pacman:
name:
- foo
- bar
state: absent
- name: Recursively remove package baz
pacman:
name: baz
state: absent
extra_args: --recursive
- name: Run the equivalent of "pacman -Sy" as a separate step
pacman:
update_cache: yes
- name: Run the equivalent of "pacman -Su" as a separate step
pacman:
upgrade: yes
- name: Run the equivalent of "pacman -Syu" as a separate step
pacman:
update_cache: yes
upgrade: yes
- name: Run the equivalent of "pacman -Rdd", force remove package baz
pacman:
name: baz
state: absent
force: yes
'''
import re
from ansible.module_utils.basic import AnsibleModule
def get_version(pacman_output):
"""Take pacman -Qi or pacman -Si output and get the Version"""
lines = pacman_output.split('\n')
for line in lines:
if line.startswith('Version '):
return line.split(':')[1].strip()
return None
def get_name(module, pacman_output):
"""Take pacman -Qi or pacman -Si output and get the package name"""
lines = pacman_output.split('\n')
for line in lines:
if line.startswith('Name '):
return line.split(':')[1].strip()
module.fail_json(msg="get_name: fail to retrieve package name from pacman output")
def query_package(module, pacman_path, name, state="present"):
"""Query the package status in both the local system and the repository. Returns a boolean to indicate if the package is installed, a second
boolean to indicate if the package is up-to-date and a third boolean to indicate whether online information were available
"""
if state == "present":
lcmd = "%s --query --info %s" % (pacman_path, name)
lrc, lstdout, lstderr = module.run_command(lcmd, check_rc=False)
if lrc != 0:
# package is not installed locally
return False, False, False
else:
# a non-zero exit code doesn't always mean the package is installed
# for example, if the package name queried is "provided" by another package
installed_name = get_name(module, lstdout)
if installed_name != name:
return False, False, False
# get the version installed locally (if any)
lversion = get_version(lstdout)
rcmd = "%s --sync --info %s" % (pacman_path, name)
rrc, rstdout, rstderr = module.run_command(rcmd, check_rc=False)
# get the version in the repository
rversion = get_version(rstdout)
if rrc == 0:
# Return True to indicate that the package is installed locally, and the result of the version number comparison
# to determine if the package is up-to-date.
return True, (lversion == rversion), False
# package is installed locally, but the remote version could not be fetched; the last True flags the error
return True, True, True
def update_package_db(module, pacman_path):
if module.params['force']:
module.params["update_cache_extra_args"] += " --refresh --refresh"
cmd = "%s --sync --refresh %s" % (pacman_path, module.params["update_cache_extra_args"])
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc == 0:
return True
else:
module.fail_json(msg="could not update package db")
def upgrade(module, pacman_path):
cmdupgrade = "%s --sync --sysupgrade --quiet --noconfirm %s" % (pacman_path, module.params["upgrade_extra_args"])
cmdneedrefresh = "%s --query --upgrades" % (pacman_path)
rc, stdout, stderr = module.run_command(cmdneedrefresh, check_rc=False)
data = stdout.split('\n')
data.remove('')
packages = []
diff = {
'before': '',
'after': '',
}
if rc == 0:
# Match lines of `pacman -Qu` output of the form:
# (package name) (before version-release) -> (after version-release)
# e.g., "ansible 2.7.1-1 -> 2.7.2-1"
regex = re.compile(r'([\w+\-.@]+) (\S+-\S+) -> (\S+-\S+)')
for p in data:
m = regex.search(p)
packages.append(m.group(1))
if module._diff:
diff['before'] += "%s-%s\n" % (m.group(1), m.group(2))
diff['after'] += "%s-%s\n" % (m.group(1), m.group(3))
if module.check_mode:
module.exit_json(changed=True, msg="%s package(s) would be upgraded" % (len(data)), packages=packages, diff=diff)
rc, stdout, stderr = module.run_command(cmdupgrade, check_rc=False)
if rc == 0:
module.exit_json(changed=True, msg='System upgraded', packages=packages, diff=diff)
else:
module.fail_json(msg="Could not upgrade")
else:
module.exit_json(changed=False, msg='Nothing to upgrade', packages=packages)
def remove_packages(module, pacman_path, packages):
data = []
diff = {
'before': '',
'after': '',
}
if module.params["force"]:
module.params["extra_args"] += " --nodeps --nodeps"
remove_c = 0
# Using a for loop in case of error, we can report the package that failed
for package in packages:
# Query the package first, to see if we even need to remove
installed, updated, unknown = query_package(module, pacman_path, package)
if not installed:
continue
cmd = "%s --remove --noconfirm --noprogressbar %s %s" % (pacman_path, module.params["extra_args"], package)
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc != 0:
module.fail_json(msg="failed to remove %s" % (package))
if module._diff:
d = stdout.split('\n')[2].split(' ')[2:]
for i, pkg in enumerate(d):
d[i] = re.sub('-[0-9].*$', '', d[i].split('/')[-1])
diff['before'] += "%s\n" % pkg
data.append('\n'.join(d))
remove_c += 1
if remove_c > 0:
module.exit_json(changed=True, msg="removed %s package(s)" % remove_c, diff=diff)
module.exit_json(changed=False, msg="package(s) already absent")
def install_packages(module, pacman_path, state, packages, package_files):
install_c = 0
package_err = []
message = ""
data = []
diff = {
'before': '',
'after': '',
}
to_install_repos = []
to_install_files = []
for i, package in enumerate(packages):
# if the package is installed and state == present or state == latest and is up-to-date then skip
installed, updated, latestError = query_package(module, pacman_path, package)
if latestError and state == 'latest':
package_err.append(package)
if installed and (state == 'present' or (state == 'latest' and updated)):
continue
if package_files[i]:
to_install_files.append(package_files[i])
else:
to_install_repos.append(package)
if to_install_repos:
cmd = "%s --sync --noconfirm --noprogressbar --needed %s %s" % (pacman_path, module.params["extra_args"], " ".join(to_install_repos))
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc != 0:
module.fail_json(msg="failed to install %s: %s" % (" ".join(to_install_repos), stderr))
data = stdout.split('\n')[3].split(' ')[2:]
data = [i for i in data if i != '']
for i, pkg in enumerate(data):
data[i] = re.sub('-[0-9].*$', '', data[i].split('/')[-1])
if module._diff:
diff['after'] += "%s\n" % pkg
install_c += len(to_install_repos)
if to_install_files:
cmd = "%s --upgrade --noconfirm --noprogressbar --needed %s %s" % (pacman_path, module.params["extra_args"], " ".join(to_install_files))
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc != 0:
module.fail_json(msg="failed to install %s: %s" % (" ".join(to_install_files), stderr))
data = stdout.split('\n')[3].split(' ')[2:]
data = [i for i in data if i != '']
for i, pkg in enumerate(data):
data[i] = re.sub('-[0-9].*$', '', data[i].split('/')[-1])
if module._diff:
diff['after'] += "%s\n" % pkg
install_c += len(to_install_files)
if state == 'latest' and len(package_err) > 0:
message = "But could not ensure 'latest' state for %s package(s) as remote version could not be fetched." % (package_err)
if install_c > 0:
module.exit_json(changed=True, msg="installed %s package(s). %s" % (install_c, message), diff=diff)
module.exit_json(changed=False, msg="package(s) already installed. %s" % (message), diff=diff)
def check_packages(module, pacman_path, packages, state):
would_be_changed = []
diff = {
'before': '',
'after': '',
'before_header': '',
'after_header': ''
}
for package in packages:
installed, updated, unknown = query_package(module, pacman_path, package)
if ((state in ["present", "latest"] and not installed) or
(state == "absent" and installed) or
(state == "latest" and not updated)):
would_be_changed.append(package)
if would_be_changed:
if state == "absent":
state = "removed"
if module._diff and (state == 'removed'):
diff['before_header'] = 'removed'
diff['before'] = '\n'.join(would_be_changed) + '\n'
elif module._diff and ((state == 'present') or (state == 'latest')):
diff['after_header'] = 'installed'
diff['after'] = '\n'.join(would_be_changed) + '\n'
module.exit_json(changed=True, msg="%s package(s) would be %s" % (
len(would_be_changed), state), diff=diff)
else:
module.exit_json(changed=False, msg="package(s) already %s" % state, diff=diff)
def expand_package_groups(module, pacman_path, pkgs):
expanded = []
for pkg in pkgs:
if pkg: # avoid empty strings
cmd = "%s --sync --groups --quiet %s" % (pacman_path, pkg)
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc == 0:
# A group was found matching the name, so expand it
for name in stdout.split('\n'):
name = name.strip()
if name:
expanded.append(name)
else:
expanded.append(pkg)
return expanded
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='list', elements='str', aliases=['pkg', 'package']),
state=dict(type='str', default='present', choices=['present', 'installed', 'latest', 'absent', 'removed']),
force=dict(type='bool', default=False),
extra_args=dict(type='str', default=''),
upgrade=dict(type='bool', default=False),
upgrade_extra_args=dict(type='str', default=''),
update_cache=dict(type='bool', default=False, aliases=['update-cache']),
update_cache_extra_args=dict(type='str', default=''),
),
required_one_of=[['name', 'update_cache', 'upgrade']],
mutually_exclusive=[['name', 'upgrade']],
supports_check_mode=True,
)
pacman_path = module.get_bin_path('pacman', True)
module.run_command_environ_update = dict(LC_ALL='C')
p = module.params
# normalize the state parameter
if p['state'] in ['present', 'installed']:
p['state'] = 'present'
elif p['state'] in ['absent', 'removed']:
p['state'] = 'absent'
if p["update_cache"] and not module.check_mode:
update_package_db(module, pacman_path)
if not (p['name'] or p['upgrade']):
module.exit_json(changed=True, msg='Updated the package master lists')
if p['update_cache'] and module.check_mode and not (p['name'] or p['upgrade']):
module.exit_json(changed=True, msg='Would have updated the package cache')
if p['upgrade']:
upgrade(module, pacman_path)
if p['name']:
pkgs = expand_package_groups(module, pacman_path, p['name'])
pkg_files = []
for i, pkg in enumerate(pkgs):
if not pkg: # avoid empty strings
continue
elif re.match(r".*\.pkg\.tar(\.(gz|bz2|xz|lrz|lzo|Z))?$", pkg):
# The package given is a filename, extract the raw pkg name from
# it and store the filename
pkg_files.append(pkg)
pkgs[i] = re.sub(r'-[0-9].*$', '', pkgs[i].split('/')[-1])
else:
pkg_files.append(None)
if module.check_mode:
check_packages(module, pacman_path, pkgs, p['state'])
if p['state'] in ['present', 'latest']:
install_packages(module, pacman_path, p['state'], pkgs, pkg_files)
elif p['state'] == 'absent':
remove_packages(module, pacman_path, pkgs)
else:
module.exit_json(changed=False, msg="No package specified to work on.")
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,035 |
openssl_publickey always fails with name 'crypto' is not defined
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
`openssl_publickey` calls `module_utils.crypto.get_fingerprint()` and cannot tell it what backend to use, and it calls `module_utils.crypto.load_privatekey()` without backend anyways, then default to `pyopenssl` backend and fails.
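The failure mode described here can be reproduced with a small standalone simulation (the module and function names below are stand-ins for illustration, not the real `module_utils.crypto` API):

```python
# Simulated sketch of the import-guard pattern that produces the NameError
# in the traceback below: the backend import is allowed to fail silently,
# so the name 'crypto' is simply never bound.
try:
    import _no_such_pyopenssl_module as crypto  # stand-in for pyOpenSSL
except ImportError:
    pass  # deferred: callers are expected to select another backend


def load_privatekey(data, backend='pyopenssl'):
    # Without an explicit backend argument the pyOpenSSL path is taken,
    # and referencing the unbound name raises NameError, not ImportError.
    if backend == 'pyopenssl':
        return crypto.load_privatekey(data)
    raise ValueError('unsupported backend: %s' % backend)


try:
    load_privatekey(b'-----BEGIN PRIVATE KEY-----')
except NameError as exc:
    print(exc)  # name 'crypto' is not defined
```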
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`openssl_publickey`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.4
config file = /root/ansible_workspace/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.1 (default, Jan 22 2020, 06:38:00) [GCC 9.2.0]
```
It should affect devel as well.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
irrelevant
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
I'm running Manjaro 18.1.5.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- name: Reproduction
hosts: localhost
tasks:
- openssl_privatekey:
path: /tmp/test.key
- openssl_publickey:
path: /tmp/test.pub
privatekey_path: /tmp/test.key
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The `openssl_publickey` task should not fail and `/tmp/test.pub` should be created.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook 2.9.4
config file = /root/ansible_workspace/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.8.1 (default, Jan 22 2020, 06:38:00) [GCC 9.2.0]
Using /root/ansible_workspace/ansible.cfg as config file
host_list declined parsing /root/ansible_workspace/managed_nodes.yml as it did not pass its verify_file() method
script declined parsing /root/ansible_workspace/managed_nodes.yml as it did not pass its verify_file() method
Skipping empty key (hosts) in group (all)
Skipping empty key (hosts) in group (critical_infra_servers)
Skipping empty key (hosts) in group (application_servers)
Skipping empty key (hosts) in group (database_servers)
Parsed /root/ansible_workspace/managed_nodes.yml inventory source with yaml plugin
PLAYBOOK: testground.yml ******************************************************************************
1 plays in testground.yml
PLAY [Reproduction] ***********************************************************************************
META: ran handlers
TASK [openssl_privatekey] *****************************************************************************
task path: /root/ansible_workspace/testground.yml:6
Sunday 02 February 2020 23:49:44 +0800 (0:00:00.018) 0:00:00.018 *******
Using module file /usr/lib/python3.8/site-packages/ansible/modules/crypto/openssl_privatekey.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
changed: [localhost] => {
"changed": true,
"filename": "/tmp/test.key",
"fingerprint": {
"blake2b": "06:c9:0a:b4:12:e7:13:cc:aa:6a:9a:22:00:6b:c8:48:06:a1:4d:5d:df:0e:ed:10:d5:3a:23:7f:4b:6e:45:b7:1e:b0:b1:13:f6:95:46:6f:67:54:c2:07:fd:10:f1:7c:8f:84:91:96:6b:5a:44:cf:2e:e1:c3:36:78:b4:b1:db",
"blake2s": "f9:b2:ab:8f:32:4f:5d:91:2b:89:dc:da:07:89:b8:41:cd:59:5f:ac:1d:3b:e3:d8:42:5b:ee:3d:a1:87:84:4b",
"md5": "3d:b2:37:ff:11:0d:26:c3:35:9e:3a:67:66:5b:77:ac",
"sha1": "11:1f:aa:0b:4b:58:44:2a:85:e6:29:10:96:6c:44:7f:f4:f9:a2:4b",
"sha224": "a4:60:9f:fb:cd:e9:7e:b4:bb:54:84:03:70:d6:0c:39:cb:9d:cb:77:8a:c8:b7:fe:97:f7:ad:11",
"sha256": "b4:92:9a:ac:a6:84:5b:a6:31:e4:11:fe:5c:29:09:76:4c:7f:29:34:fa:a2:89:c5:25:d3:08:69:07:54:2d:69",
"sha384": "51:41:bd:08:d5:fa:2d:c1:3f:d8:69:e8:b9:36:fc:9e:68:f0:92:b3:c6:a4:f2:f1:9f:80:f4:66:e8:ad:47:f5:8d:57:ca:b4:71:b5:6d:ed:8c:f7:01:11:a6:68:27:96",
"sha3_224": "5d:73:c9:b6:80:a4:6f:0a:60:6a:8a:c9:b8:af:9e:4f:18:ca:cb:85:35:44:b4:1d:65:a3:51:4f",
"sha3_256": "a6:ac:fb:5c:8a:a8:b9:1c:c0:99:05:15:20:03:9f:ce:a8:42:03:80:75:50:aa:5d:4c:8e:0e:0e:a4:d0:6d:27",
"sha3_384": "9f:46:2a:b7:6c:14:68:37:ad:c0:12:ae:9c:a9:6a:ab:34:86:06:02:15:a5:10:57:9f:2b:78:b5:69:af:d9:f9:81:33:d2:67:58:08:00:84:8b:50:9f:76:45:ab:51:e3",
"sha3_512": "b1:3c:df:1e:27:0c:b3:b0:55:3e:cd:42:d2:67:ce:58:02:39:ac:8d:38:11:bf:74:e6:0a:84:c1:fd:4c:a5:01:74:f1:5a:3d:4b:8c:7e:98:b7:6a:18:5a:e5:98:04:a7:b6:5d:9a:4e:93:88:85:80:4f:9b:8c:35:b8:55:f6:c6",
"sha512": "cd:3a:f3:ed:dd:86:28:75:2a:8a:c5:65:88:f3:b0:8b:c5:c3:d3:b9:3d:a5:5d:78:1a:04:cb:dd:0b:58:a5:4d:9a:02:37:a4:e5:4b:ce:f3:4f:54:11:98:93:f3:dd:67:ac:ef:04:06:17:2d:a5:08:09:1a:19:12:cc:1f:56:63",
"shake_128": "f7:de:e4:52:c2:65:c0:e6:c8:7a:f9:35:d5:63:94:59:1d:c1:c5:52:b1:3e:8a:2a:dc:5a:2a:57:df:cc:32:d0",
"shake_256": "ba:d5:37:e1:78:23:f3:39:ed:be:e0:d8:f3:c1:75:a5:28:fe:b2:e1:2b:17:1d:8c:7f:04:2c:0a:2a:5e:ae:c4"
},
"invocation": {
"module_args": {
"attributes": null,
"backup": false,
"cipher": null,
"content": null,
"curve": null,
"delimiter": null,
"directory_mode": null,
"follow": false,
"force": false,
"group": null,
"mode": "0600",
"owner": null,
"passphrase": null,
"path": "/tmp/test.key",
"regexp": null,
"remote_src": null,
"select_crypto_backend": "auto",
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"size": 4096,
"src": null,
"state": "present",
"type": "RSA",
"unsafe_writes": null
}
},
"size": 4096,
"type": "RSA"
}
TASK [openssl_publickey] ******************************************************************************
task path: /root/ansible_workspace/testground.yml:9
Sunday 02 February 2020 23:49:45 +0800 (0:00:00.824) 0:00:00.843 *******
Using module file /usr/lib/python3.8/site-packages/ansible/modules/crypto/openssl_publickey.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/module_utils/crypto.py", line 209, in load_privatekey
NameError: name 'crypto' is not defined
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 94, in _ansiballz_main
File "<stdin>", line 40, in invoke_module
File "/usr/lib/python3.8/runpy.py", line 206, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/modules/crypto/openssl_publickey.py", line 432, in <module>
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/modules/crypto/openssl_publickey.py", line 416, in main
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/modules/crypto/openssl_publickey.py", line 266, in generate
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/module_utils/crypto.py", line 171, in get_fingerprint
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/module_utils/crypto.py", line 212, in load_privatekey
NameError: name 'crypto' is not defined
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "*snip*",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
PLAY RECAP ********************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Sunday 02 February 2020 23:49:45 +0800 (0:00:00.270) 0:00:01.113 *******
===============================================================================
openssl_privatekey ----------------------------------------------------------------------------- 0.82s
/root/ansible_workspace/testground.yml:6 -------------------------------------------------------------
openssl_publickey ------------------------------------------------------------------------------ 0.27s
/root/ansible_workspace/testground.yml:9 -------------------------------------------------------------
```
|
https://github.com/ansible/ansible/issues/67035
|
https://github.com/ansible/ansible/pull/67036
|
b1a8bded3fe769244b16525dadcd19c2007b80c7
|
a0e5e2e4c597c8cf0fdd39c2df45fe33fd38eedb
| 2020-02-02T15:55:25Z |
python
| 2020-02-03T05:18:19Z |
changelogs/fragments/67036-openssl_publickey-backend.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,035 |
openssl_publickey always fails with name 'crypto' is not defined
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
`openssl_publickey` calls `module_utils.crypto.get_fingerprint()` and cannot tell it what backend to use, and it calls `module_utils.crypto.load_privatekey()` without backend anyways, then default to `pyopenssl` backend and fails.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`openssl_publickey`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.4
config file = /root/ansible_workspace/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.1 (default, Jan 22 2020, 06:38:00) [GCC 9.2.0]
```
It should affect devel as well.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
irrelevant
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
I'm running Manjaro 18.1.5.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- name: Reproduction
hosts: localhost
tasks:
- openssl_privatekey:
path: /tmp/test.key
- openssl_publickey:
path: /tmp/test.pub
privatekey_path: /tmp/test.key
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The `openssl_publickey` task should not fail and `/tmp/test.pub` should be created.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook 2.9.4
config file = /root/ansible_workspace/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.8.1 (default, Jan 22 2020, 06:38:00) [GCC 9.2.0]
Using /root/ansible_workspace/ansible.cfg as config file
host_list declined parsing /root/ansible_workspace/managed_nodes.yml as it did not pass its verify_file() method
script declined parsing /root/ansible_workspace/managed_nodes.yml as it did not pass its verify_file() method
Skipping empty key (hosts) in group (all)
Skipping empty key (hosts) in group (critical_infra_servers)
Skipping empty key (hosts) in group (application_servers)
Skipping empty key (hosts) in group (database_servers)
Parsed /root/ansible_workspace/managed_nodes.yml inventory source with yaml plugin
PLAYBOOK: testground.yml ******************************************************************************
1 plays in testground.yml
PLAY [Reproduction] ***********************************************************************************
META: ran handlers
TASK [openssl_privatekey] *****************************************************************************
task path: /root/ansible_workspace/testground.yml:6
Sunday 02 February 2020 23:49:44 +0800 (0:00:00.018) 0:00:00.018 *******
Using module file /usr/lib/python3.8/site-packages/ansible/modules/crypto/openssl_privatekey.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
changed: [localhost] => {
"changed": true,
"filename": "/tmp/test.key",
"fingerprint": {
"blake2b": "06:c9:0a:b4:12:e7:13:cc:aa:6a:9a:22:00:6b:c8:48:06:a1:4d:5d:df:0e:ed:10:d5:3a:23:7f:4b:6e:45:b7:1e:b0:b1:13:f6:95:46:6f:67:54:c2:07:fd:10:f1:7c:8f:84:91:96:6b:5a:44:cf:2e:e1:c3:36:78:b4:b1:db",
"blake2s": "f9:b2:ab:8f:32:4f:5d:91:2b:89:dc:da:07:89:b8:41:cd:59:5f:ac:1d:3b:e3:d8:42:5b:ee:3d:a1:87:84:4b",
"md5": "3d:b2:37:ff:11:0d:26:c3:35:9e:3a:67:66:5b:77:ac",
"sha1": "11:1f:aa:0b:4b:58:44:2a:85:e6:29:10:96:6c:44:7f:f4:f9:a2:4b",
"sha224": "a4:60:9f:fb:cd:e9:7e:b4:bb:54:84:03:70:d6:0c:39:cb:9d:cb:77:8a:c8:b7:fe:97:f7:ad:11",
"sha256": "b4:92:9a:ac:a6:84:5b:a6:31:e4:11:fe:5c:29:09:76:4c:7f:29:34:fa:a2:89:c5:25:d3:08:69:07:54:2d:69",
"sha384": "51:41:bd:08:d5:fa:2d:c1:3f:d8:69:e8:b9:36:fc:9e:68:f0:92:b3:c6:a4:f2:f1:9f:80:f4:66:e8:ad:47:f5:8d:57:ca:b4:71:b5:6d:ed:8c:f7:01:11:a6:68:27:96",
"sha3_224": "5d:73:c9:b6:80:a4:6f:0a:60:6a:8a:c9:b8:af:9e:4f:18:ca:cb:85:35:44:b4:1d:65:a3:51:4f",
"sha3_256": "a6:ac:fb:5c:8a:a8:b9:1c:c0:99:05:15:20:03:9f:ce:a8:42:03:80:75:50:aa:5d:4c:8e:0e:0e:a4:d0:6d:27",
"sha3_384": "9f:46:2a:b7:6c:14:68:37:ad:c0:12:ae:9c:a9:6a:ab:34:86:06:02:15:a5:10:57:9f:2b:78:b5:69:af:d9:f9:81:33:d2:67:58:08:00:84:8b:50:9f:76:45:ab:51:e3",
"sha3_512": "b1:3c:df:1e:27:0c:b3:b0:55:3e:cd:42:d2:67:ce:58:02:39:ac:8d:38:11:bf:74:e6:0a:84:c1:fd:4c:a5:01:74:f1:5a:3d:4b:8c:7e:98:b7:6a:18:5a:e5:98:04:a7:b6:5d:9a:4e:93:88:85:80:4f:9b:8c:35:b8:55:f6:c6",
"sha512": "cd:3a:f3:ed:dd:86:28:75:2a:8a:c5:65:88:f3:b0:8b:c5:c3:d3:b9:3d:a5:5d:78:1a:04:cb:dd:0b:58:a5:4d:9a:02:37:a4:e5:4b:ce:f3:4f:54:11:98:93:f3:dd:67:ac:ef:04:06:17:2d:a5:08:09:1a:19:12:cc:1f:56:63",
"shake_128": "f7:de:e4:52:c2:65:c0:e6:c8:7a:f9:35:d5:63:94:59:1d:c1:c5:52:b1:3e:8a:2a:dc:5a:2a:57:df:cc:32:d0",
"shake_256": "ba:d5:37:e1:78:23:f3:39:ed:be:e0:d8:f3:c1:75:a5:28:fe:b2:e1:2b:17:1d:8c:7f:04:2c:0a:2a:5e:ae:c4"
},
"invocation": {
"module_args": {
"attributes": null,
"backup": false,
"cipher": null,
"content": null,
"curve": null,
"delimiter": null,
"directory_mode": null,
"follow": false,
"force": false,
"group": null,
"mode": "0600",
"owner": null,
"passphrase": null,
"path": "/tmp/test.key",
"regexp": null,
"remote_src": null,
"select_crypto_backend": "auto",
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"size": 4096,
"src": null,
"state": "present",
"type": "RSA",
"unsafe_writes": null
}
},
"size": 4096,
"type": "RSA"
}
TASK [openssl_publickey] ******************************************************************************
task path: /root/ansible_workspace/testground.yml:9
Sunday 02 February 2020 23:49:45 +0800 (0:00:00.824) 0:00:00.843 *******
Using module file /usr/lib/python3.8/site-packages/ansible/modules/crypto/openssl_publickey.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/module_utils/crypto.py", line 209, in load_privatekey
NameError: name 'crypto' is not defined
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 94, in _ansiballz_main
File "<stdin>", line 40, in invoke_module
File "/usr/lib/python3.8/runpy.py", line 206, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/modules/crypto/openssl_publickey.py", line 432, in <module>
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/modules/crypto/openssl_publickey.py", line 416, in main
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/modules/crypto/openssl_publickey.py", line 266, in generate
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/module_utils/crypto.py", line 171, in get_fingerprint
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/module_utils/crypto.py", line 212, in load_privatekey
NameError: name 'crypto' is not defined
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "*snip*",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
PLAY RECAP ********************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Sunday 02 February 2020 23:49:45 +0800 (0:00:00.270) 0:00:01.113 *******
===============================================================================
openssl_privatekey ----------------------------------------------------------------------------- 0.82s
/root/ansible_workspace/testground.yml:6 -------------------------------------------------------------
openssl_publickey ------------------------------------------------------------------------------ 0.27s
/root/ansible_workspace/testground.yml:9 -------------------------------------------------------------
```
|
https://github.com/ansible/ansible/issues/67035
|
https://github.com/ansible/ansible/pull/67036
|
b1a8bded3fe769244b16525dadcd19c2007b80c7
|
a0e5e2e4c597c8cf0fdd39c2df45fe33fd38eedb
| 2020-02-02T15:55:25Z |
python
| 2020-02-03T05:18:19Z |
lib/ansible/module_utils/crypto.py
|
# -*- coding: utf-8 -*-
#
# (c) 2016, Yanis Guenane <[email protected]>
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# ----------------------------------------------------------------------
# A clearly marked portion of this file is licensed under the BSD license
# Copyright (c) 2015, 2016 Paul Kehrer (@reaperhulk)
# Copyright (c) 2017 Fraser Tweedale (@frasertweedale)
# For more details, search for the function _obj2txt().
# ---------------------------------------------------------------------
# A clearly marked portion of this file is extracted from a project that
# is licensed under the Apache License 2.0
# Copyright (c) the OpenSSL contributors
# For more details, search for the function _OID_MAP.
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import sys
from distutils.version import LooseVersion
try:
import OpenSSL
from OpenSSL import crypto
except ImportError:
# An error will be raised in the calling class to let the end
# user know that OpenSSL couldn't be found.
pass
try:
import cryptography
from cryptography import x509
from cryptography.hazmat.backends import default_backend as cryptography_backend
from cryptography.hazmat.primitives.serialization import load_pem_private_key
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives import serialization
import ipaddress
# Older versions of cryptography (< 2.1) do not have __hash__ functions for
# general name objects (DNSName, IPAddress, ...), while providing overloaded
# equality and string representation operations. This makes it impossible to
# use them in hash-based data structures such as set or dict. Since we are
# actually doing that in openssl_certificate, and potentially in other code,
# we need to monkey-patch __hash__ for these classes to make sure our code
# works fine.
if LooseVersion(cryptography.__version__) < LooseVersion('2.1'):
# A very simple hash function which relies on the representation
# of an object to be implemented. This is the case since at least
# cryptography 1.0, see
# https://github.com/pyca/cryptography/commit/7a9abce4bff36c05d26d8d2680303a6f64a0e84f
def simple_hash(self):
return hash(repr(self))
# The hash functions for the following types were added for cryptography 2.1:
# https://github.com/pyca/cryptography/commit/fbfc36da2a4769045f2373b004ddf0aff906cf38
x509.DNSName.__hash__ = simple_hash
x509.DirectoryName.__hash__ = simple_hash
x509.GeneralName.__hash__ = simple_hash
x509.IPAddress.__hash__ = simple_hash
x509.OtherName.__hash__ = simple_hash
x509.RegisteredID.__hash__ = simple_hash
if LooseVersion(cryptography.__version__) < LooseVersion('1.2'):
# The hash functions for the following types were added for cryptography 1.2:
# https://github.com/pyca/cryptography/commit/b642deed88a8696e5f01ce6855ccf89985fc35d0
# https://github.com/pyca/cryptography/commit/d1b5681f6db2bde7a14625538bd7907b08dfb486
x509.RFC822Name.__hash__ = simple_hash
x509.UniformResourceIdentifier.__hash__ = simple_hash
# Test whether we have support for X25519, X448, Ed25519 and/or Ed448
try:
import cryptography.hazmat.primitives.asymmetric.x25519
CRYPTOGRAPHY_HAS_X25519 = True
try:
cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.private_bytes
CRYPTOGRAPHY_HAS_X25519_FULL = True
except AttributeError:
CRYPTOGRAPHY_HAS_X25519_FULL = False
except ImportError:
CRYPTOGRAPHY_HAS_X25519 = False
CRYPTOGRAPHY_HAS_X25519_FULL = False
try:
import cryptography.hazmat.primitives.asymmetric.x448
CRYPTOGRAPHY_HAS_X448 = True
except ImportError:
CRYPTOGRAPHY_HAS_X448 = False
try:
import cryptography.hazmat.primitives.asymmetric.ed25519
CRYPTOGRAPHY_HAS_ED25519 = True
except ImportError:
CRYPTOGRAPHY_HAS_ED25519 = False
try:
import cryptography.hazmat.primitives.asymmetric.ed448
CRYPTOGRAPHY_HAS_ED448 = True
except ImportError:
CRYPTOGRAPHY_HAS_ED448 = False
except ImportError:
# Error handled in the calling module.
CRYPTOGRAPHY_HAS_X25519 = False
CRYPTOGRAPHY_HAS_X25519_FULL = False
CRYPTOGRAPHY_HAS_X448 = False
CRYPTOGRAPHY_HAS_ED25519 = False
CRYPTOGRAPHY_HAS_ED448 = False
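The `__hash__` monkey-patch above can be illustrated standalone. This is a minimal sketch using a hypothetical `GeneralNameLike` class (not a real cryptography type) that, like older cryptography general-name objects, defines `__eq__` and `__repr__` but no usable `__hash__`:

```python
class GeneralNameLike(object):
    """Stand-in for a pre-2.1 cryptography general-name object (illustration only)."""
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return isinstance(other, GeneralNameLike) and self.value == other.value

    def __repr__(self):
        return '<GeneralNameLike(value=%r)>' % (self.value,)


def simple_hash(self):
    # Same trick as above: delegate hashing to the stable repr()
    return hash(repr(self))


# Without this assignment the class would be unhashable (Python 3 sets
# __hash__ to None when __eq__ is defined), so sets/dicts would fail.
GeneralNameLike.__hash__ = simple_hash

# The set now deduplicates, because equal objects hash equally
names = {GeneralNameLike('ansible.com'), GeneralNameLike('ansible.com')}
```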
import abc
import base64
import binascii
import datetime
import errno
import hashlib
import os
import re
import tempfile
from ansible.module_utils import six
from ansible.module_utils._text import to_bytes, to_text
class OpenSSLObjectError(Exception):
pass
class OpenSSLBadPassphraseError(OpenSSLObjectError):
pass
def get_fingerprint_of_bytes(source):
"""Generate the fingerprint of the given bytes."""
fingerprint = {}
try:
algorithms = hashlib.algorithms
except AttributeError:
try:
algorithms = hashlib.algorithms_guaranteed
except AttributeError:
return None
for algo in algorithms:
f = getattr(hashlib, algo)
h = f(source)
try:
    pubkey_digest = h.hexdigest()
except TypeError:
    # Variable-length hash functions (e.g. shake_128) require a
    # length parameter for hexdigest()
    pubkey_digest = h.hexdigest(32)
fingerprint[algo] = ':'.join(pubkey_digest[i:i + 2] for i in range(0, len(pubkey_digest), 2))
return fingerprint
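For reference, this is what the colon-separated fingerprint format produced by `get_fingerprint_of_bytes()` looks like for a single algorithm (a standalone sketch using SHA-256 from the standard library):

```python
import hashlib

# SHA-256 hexdigest is 64 hex characters; the fingerprint groups them
# into colon-separated byte pairs, e.g. 'a1:b2:...'
digest = hashlib.sha256(b'public-key-bytes').hexdigest()
fingerprint = ':'.join(digest[i:i + 2] for i in range(0, len(digest), 2))
```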
def get_fingerprint(path, passphrase=None, content=None, backend='pyopenssl'):
    """Generate the fingerprint of the public key."""
    privatekey = load_privatekey(path, passphrase=passphrase, content=content, check_passphrase=False, backend=backend)
    if backend == 'pyopenssl':
        try:
            publickey = crypto.dump_publickey(crypto.FILETYPE_ASN1, privatekey)
        except AttributeError:
            # If PyOpenSSL < 16.0 crypto.dump_publickey() will fail.
            try:
                bio = crypto._new_mem_buf()
                rc = crypto._lib.i2d_PUBKEY_bio(bio, privatekey._pkey)
                if rc != 1:
                    crypto._raise_current_error()
                publickey = crypto._bio_to_string(bio)
            except AttributeError:
                # By doing this we prevent the code from raising an error
                # yet we return no value in the fingerprint hash.
                return None
    elif backend == 'cryptography':
        # Using the cryptography backend avoids a NameError on 'crypto'
        # when pyOpenSSL is not installed.
        publickey = privatekey.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo
        )
    return get_fingerprint_of_bytes(publickey)
def load_file_if_exists(path, module=None, ignore_errors=False):
try:
with open(path, 'rb') as f:
return f.read()
except EnvironmentError as exc:
if exc.errno == errno.ENOENT:
return None
if ignore_errors:
return None
if module is None:
raise
module.fail_json(msg='Error while loading {0} - {1}'.format(path, str(exc)))
except Exception as exc:
if ignore_errors:
return None
if module is None:
raise
module.fail_json(msg='Error while loading {0} - {1}'.format(path, str(exc)))
def load_privatekey(path, passphrase=None, check_passphrase=True, content=None, backend='pyopenssl'):
"""Load the specified OpenSSL private key.
The content can also be specified via content; in that case,
this function will not load the key from disk.
"""
try:
if content is None:
with open(path, 'rb') as b_priv_key_fh:
priv_key_detail = b_priv_key_fh.read()
else:
priv_key_detail = content
if backend == 'pyopenssl':
# First try: try to load with real passphrase (resp. empty string)
# Will work if this is the correct passphrase, or the key is not
# password-protected.
try:
result = crypto.load_privatekey(crypto.FILETYPE_PEM,
priv_key_detail,
to_bytes(passphrase or ''))
except crypto.Error as e:
if len(e.args) > 0 and len(e.args[0]) > 0:
if e.args[0][0][2] in ('bad decrypt', 'bad password read'):
# This happens in case we have the wrong passphrase.
if passphrase is not None:
raise OpenSSLBadPassphraseError('Wrong passphrase provided for private key!')
else:
raise OpenSSLBadPassphraseError('No passphrase provided, but private key is password-protected!')
raise OpenSSLObjectError('Error while deserializing key: {0}'.format(e))
if check_passphrase:
# Next we want to make sure that the key is actually protected by
# a passphrase (in case we did try the empty string before, make
# sure that the key is not protected by the empty string)
try:
crypto.load_privatekey(crypto.FILETYPE_PEM,
priv_key_detail,
to_bytes('y' if passphrase == 'x' else 'x'))
if passphrase is not None:
# Since we can load the key without an exception, the
# key isn't password-protected
raise OpenSSLBadPassphraseError('Passphrase provided, but private key is not password-protected!')
except crypto.Error as e:
if passphrase is None and len(e.args) > 0 and len(e.args[0]) > 0:
if e.args[0][0][2] in ('bad decrypt', 'bad password read'):
# The key is obviously protected by the empty string.
# Don't do this at home (if it's possible at all)...
raise OpenSSLBadPassphraseError('No passphrase provided, but private key is password-protected!')
elif backend == 'cryptography':
try:
result = load_pem_private_key(priv_key_detail,
None if passphrase is None else to_bytes(passphrase),
cryptography_backend())
except TypeError as dummy:
raise OpenSSLBadPassphraseError('Wrong or empty passphrase provided for private key')
except ValueError as dummy:
raise OpenSSLBadPassphraseError('Wrong passphrase provided for private key')
return result
except (IOError, OSError) as exc:
raise OpenSSLObjectError(exc)
def load_certificate(path, content=None, backend='pyopenssl'):
"""Load the specified certificate."""
try:
if content is None:
with open(path, 'rb') as cert_fh:
cert_content = cert_fh.read()
else:
cert_content = content
if backend == 'pyopenssl':
return crypto.load_certificate(crypto.FILETYPE_PEM, cert_content)
elif backend == 'cryptography':
return x509.load_pem_x509_certificate(cert_content, cryptography_backend())
except (IOError, OSError) as exc:
raise OpenSSLObjectError(exc)
def load_certificate_request(path, content=None, backend='pyopenssl'):
"""Load the specified certificate signing request."""
try:
if content is None:
with open(path, 'rb') as csr_fh:
csr_content = csr_fh.read()
else:
csr_content = content
except (IOError, OSError) as exc:
raise OpenSSLObjectError(exc)
if backend == 'pyopenssl':
return crypto.load_certificate_request(crypto.FILETYPE_PEM, csr_content)
elif backend == 'cryptography':
return x509.load_pem_x509_csr(csr_content, cryptography_backend())
def parse_name_field(input_dict):
"""Take a dict with key: value or key: list_of_values mappings and return a list of tuples"""
result = []
for key in input_dict:
if isinstance(input_dict[key], list):
for entry in input_dict[key]:
result.append((key, entry))
else:
result.append((key, input_dict[key]))
return result
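The flattening performed by `parse_name_field()` can be shown with a small standalone sketch (the input dict here is just an example): a value that is a list becomes one `(key, value)` tuple per element, while a scalar value becomes a single tuple.

```python
input_dict = {'O': 'Ansible', 'OU': ['dev', 'ops']}

result = []
for key, value in input_dict.items():
    if isinstance(value, list):
        # One tuple per list element, preserving the key
        result.extend((key, entry) for entry in value)
    else:
        result.append((key, value))
```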
def convert_relative_to_datetime(relative_time_string):
"""Get a datetime.datetime or None from a string in the time format described in sshd_config(5)"""
parsed_result = re.match(
r"^(?P<prefix>[+-])((?P<weeks>\d+)[wW])?((?P<days>\d+)[dD])?((?P<hours>\d+)[hH])?((?P<minutes>\d+)[mM])?((?P<seconds>\d+)[sS]?)?$",
relative_time_string)
if parsed_result is None or len(relative_time_string) == 1:
# not matched or only a single "+" or "-"
return None
offset = datetime.timedelta(0)
if parsed_result.group("weeks") is not None:
offset += datetime.timedelta(weeks=int(parsed_result.group("weeks")))
if parsed_result.group("days") is not None:
offset += datetime.timedelta(days=int(parsed_result.group("days")))
if parsed_result.group("hours") is not None:
offset += datetime.timedelta(hours=int(parsed_result.group("hours")))
if parsed_result.group("minutes") is not None:
offset += datetime.timedelta(
minutes=int(parsed_result.group("minutes")))
if parsed_result.group("seconds") is not None:
offset += datetime.timedelta(
seconds=int(parsed_result.group("seconds")))
if parsed_result.group("prefix") == "+":
return datetime.datetime.utcnow() + offset
else:
return datetime.datetime.utcnow() - offset
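The sshd_config(5) relative-time grammar handled by `convert_relative_to_datetime()` can be sketched standalone: a `+`/`-` prefix followed by optional week/day/hour/minute/second fields. This re-uses the same regular expression on an example string:

```python
import datetime
import re

match = re.match(
    r"^(?P<prefix>[+-])((?P<weeks>\d+)[wW])?((?P<days>\d+)[dD])?"
    r"((?P<hours>\d+)[hH])?((?P<minutes>\d+)[mM])?((?P<seconds>\d+)[sS]?)?$",
    "+1w2d3h")

# Missing groups are None, so default them to 0 before building the offset
offset = datetime.timedelta(
    weeks=int(match.group('weeks') or 0),
    days=int(match.group('days') or 0),
    hours=int(match.group('hours') or 0),
    minutes=int(match.group('minutes') or 0),
    seconds=int(match.group('seconds') or 0))
# "+1w2d3h" means "now plus 1 week, 2 days and 3 hours"
```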
def select_message_digest(digest_string):
digest = None
if digest_string == 'sha256':
digest = hashes.SHA256()
elif digest_string == 'sha384':
digest = hashes.SHA384()
elif digest_string == 'sha512':
digest = hashes.SHA512()
elif digest_string == 'sha1':
digest = hashes.SHA1()
elif digest_string == 'md5':
digest = hashes.MD5()
return digest
def write_file(module, content, default_mode=None, path=None):
'''
Writes content into destination file as securely as possible.
Uses file arguments from module.
'''
# Find out parameters for file
file_args = module.load_file_common_arguments(module.params)
if file_args['mode'] is None:
file_args['mode'] = default_mode
# If the path was set to override module path
if path is not None:
file_args['path'] = path
# Create tempfile name
tmp_fd, tmp_name = tempfile.mkstemp(prefix=b'.ansible_tmp')
try:
os.close(tmp_fd)
except Exception as dummy:
pass
module.add_cleanup_file(tmp_name) # if we fail, let Ansible try to remove the file
try:
try:
# Create tempfile
fd = os.open(tmp_name, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
os.write(fd, content)
os.close(fd)
except Exception as e:
try:
os.remove(tmp_name)
except Exception as dummy:
pass
module.fail_json(msg='Error while writing result into temporary file: {0}'.format(e))
# Update destination to wanted permissions
if os.path.exists(file_args['path']):
module.set_fs_attributes_if_different(file_args, False)
# Move tempfile to final destination
module.atomic_move(tmp_name, file_args['path'])
# Try to update permissions again
module.set_fs_attributes_if_different(file_args, False)
except Exception as e:
try:
os.remove(tmp_name)
except Exception as dummy:
pass
module.fail_json(msg='Error while writing result: {0}'.format(e))
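The write-to-temp-then-rename pattern used by `write_file()` can be sketched in isolation. This minimal version (the `atomic_write` helper is illustrative, not part of this module) uses `os.replace`, which is atomic on POSIX; the real `write_file()` instead uses Ansible's `atomic_move` plus permission handling:

```python
import os
import tempfile

def atomic_write(path, content, mode=0o600):
    # Create the temporary file next to the destination so the rename
    # stays on the same filesystem (a cross-device rename would fail)
    fd, tmp_name = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    try:
        os.write(fd, content)
        os.close(fd)
        os.chmod(tmp_name, mode)
        os.replace(tmp_name, path)  # atomic on POSIX
    except Exception:
        os.unlink(tmp_name)
        raise

target = os.path.join(tempfile.mkdtemp(), 'example.txt')
atomic_write(target, b'hello')
```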
@six.add_metaclass(abc.ABCMeta)
class OpenSSLObject(object):
def __init__(self, path, state, force, check_mode):
self.path = path
self.state = state
self.force = force
self.name = os.path.basename(path)
self.changed = False
self.check_mode = check_mode
def check(self, module, perms_required=True):
"""Ensure the resource is in its desired state."""
def _check_state():
return os.path.exists(self.path)
def _check_perms(module):
file_args = module.load_file_common_arguments(module.params)
return not module.set_fs_attributes_if_different(file_args, False)
if not perms_required:
return _check_state()
return _check_state() and _check_perms(module)
@abc.abstractmethod
def dump(self):
"""Serialize the object into a dictionary."""
pass
@abc.abstractmethod
def generate(self):
"""Generate the resource."""
pass
def remove(self, module):
"""Remove the resource from the filesystem."""
try:
os.remove(self.path)
self.changed = True
except OSError as exc:
if exc.errno != errno.ENOENT:
raise OpenSSLObjectError(exc)
else:
pass
# #####################################################################################
# #####################################################################################
# This has been extracted from the OpenSSL project's objects.txt:
# https://github.com/openssl/openssl/blob/9537fe5757bb07761fa275d779bbd40bcf5530e4/crypto/objects/objects.txt
# Extracted with https://gist.github.com/felixfontein/376748017ad65ead093d56a45a5bf376
#
# In case the following data structure has any copyrightable content, note that it is licensed as follows:
# Copyright (c) the OpenSSL contributors
# Licensed under the Apache License 2.0
# https://github.com/openssl/openssl/blob/master/LICENSE
_OID_MAP = {
'0': ('itu-t', 'ITU-T', 'ccitt'),
'0.3.4401.5': ('ntt-ds', ),
'0.3.4401.5.3.1.9': ('camellia', ),
'0.3.4401.5.3.1.9.1': ('camellia-128-ecb', 'CAMELLIA-128-ECB'),
'0.3.4401.5.3.1.9.3': ('camellia-128-ofb', 'CAMELLIA-128-OFB'),
'0.3.4401.5.3.1.9.4': ('camellia-128-cfb', 'CAMELLIA-128-CFB'),
'0.3.4401.5.3.1.9.6': ('camellia-128-gcm', 'CAMELLIA-128-GCM'),
'0.3.4401.5.3.1.9.7': ('camellia-128-ccm', 'CAMELLIA-128-CCM'),
'0.3.4401.5.3.1.9.9': ('camellia-128-ctr', 'CAMELLIA-128-CTR'),
'0.3.4401.5.3.1.9.10': ('camellia-128-cmac', 'CAMELLIA-128-CMAC'),
'0.3.4401.5.3.1.9.21': ('camellia-192-ecb', 'CAMELLIA-192-ECB'),
'0.3.4401.5.3.1.9.23': ('camellia-192-ofb', 'CAMELLIA-192-OFB'),
'0.3.4401.5.3.1.9.24': ('camellia-192-cfb', 'CAMELLIA-192-CFB'),
'0.3.4401.5.3.1.9.26': ('camellia-192-gcm', 'CAMELLIA-192-GCM'),
'0.3.4401.5.3.1.9.27': ('camellia-192-ccm', 'CAMELLIA-192-CCM'),
'0.3.4401.5.3.1.9.29': ('camellia-192-ctr', 'CAMELLIA-192-CTR'),
'0.3.4401.5.3.1.9.30': ('camellia-192-cmac', 'CAMELLIA-192-CMAC'),
'0.3.4401.5.3.1.9.41': ('camellia-256-ecb', 'CAMELLIA-256-ECB'),
'0.3.4401.5.3.1.9.43': ('camellia-256-ofb', 'CAMELLIA-256-OFB'),
'0.3.4401.5.3.1.9.44': ('camellia-256-cfb', 'CAMELLIA-256-CFB'),
'0.3.4401.5.3.1.9.46': ('camellia-256-gcm', 'CAMELLIA-256-GCM'),
'0.3.4401.5.3.1.9.47': ('camellia-256-ccm', 'CAMELLIA-256-CCM'),
'0.3.4401.5.3.1.9.49': ('camellia-256-ctr', 'CAMELLIA-256-CTR'),
'0.3.4401.5.3.1.9.50': ('camellia-256-cmac', 'CAMELLIA-256-CMAC'),
'0.9': ('data', ),
'0.9.2342': ('pss', ),
'0.9.2342.19200300': ('ucl', ),
'0.9.2342.19200300.100': ('pilot', ),
'0.9.2342.19200300.100.1': ('pilotAttributeType', ),
'0.9.2342.19200300.100.1.1': ('userId', 'UID'),
'0.9.2342.19200300.100.1.2': ('textEncodedORAddress', ),
'0.9.2342.19200300.100.1.3': ('rfc822Mailbox', 'mail'),
'0.9.2342.19200300.100.1.4': ('info', ),
'0.9.2342.19200300.100.1.5': ('favouriteDrink', ),
'0.9.2342.19200300.100.1.6': ('roomNumber', ),
'0.9.2342.19200300.100.1.7': ('photo', ),
'0.9.2342.19200300.100.1.8': ('userClass', ),
'0.9.2342.19200300.100.1.9': ('host', ),
'0.9.2342.19200300.100.1.10': ('manager', ),
'0.9.2342.19200300.100.1.11': ('documentIdentifier', ),
'0.9.2342.19200300.100.1.12': ('documentTitle', ),
'0.9.2342.19200300.100.1.13': ('documentVersion', ),
'0.9.2342.19200300.100.1.14': ('documentAuthor', ),
'0.9.2342.19200300.100.1.15': ('documentLocation', ),
'0.9.2342.19200300.100.1.20': ('homeTelephoneNumber', ),
'0.9.2342.19200300.100.1.21': ('secretary', ),
'0.9.2342.19200300.100.1.22': ('otherMailbox', ),
'0.9.2342.19200300.100.1.23': ('lastModifiedTime', ),
'0.9.2342.19200300.100.1.24': ('lastModifiedBy', ),
'0.9.2342.19200300.100.1.25': ('domainComponent', 'DC'),
'0.9.2342.19200300.100.1.26': ('aRecord', ),
'0.9.2342.19200300.100.1.27': ('pilotAttributeType27', ),
'0.9.2342.19200300.100.1.28': ('mXRecord', ),
'0.9.2342.19200300.100.1.29': ('nSRecord', ),
'0.9.2342.19200300.100.1.30': ('sOARecord', ),
'0.9.2342.19200300.100.1.31': ('cNAMERecord', ),
'0.9.2342.19200300.100.1.37': ('associatedDomain', ),
'0.9.2342.19200300.100.1.38': ('associatedName', ),
'0.9.2342.19200300.100.1.39': ('homePostalAddress', ),
'0.9.2342.19200300.100.1.40': ('personalTitle', ),
'0.9.2342.19200300.100.1.41': ('mobileTelephoneNumber', ),
'0.9.2342.19200300.100.1.42': ('pagerTelephoneNumber', ),
'0.9.2342.19200300.100.1.43': ('friendlyCountryName', ),
'0.9.2342.19200300.100.1.44': ('uniqueIdentifier', 'uid'),
'0.9.2342.19200300.100.1.45': ('organizationalStatus', ),
'0.9.2342.19200300.100.1.46': ('janetMailbox', ),
'0.9.2342.19200300.100.1.47': ('mailPreferenceOption', ),
'0.9.2342.19200300.100.1.48': ('buildingName', ),
'0.9.2342.19200300.100.1.49': ('dSAQuality', ),
'0.9.2342.19200300.100.1.50': ('singleLevelQuality', ),
'0.9.2342.19200300.100.1.51': ('subtreeMinimumQuality', ),
'0.9.2342.19200300.100.1.52': ('subtreeMaximumQuality', ),
'0.9.2342.19200300.100.1.53': ('personalSignature', ),
'0.9.2342.19200300.100.1.54': ('dITRedirect', ),
'0.9.2342.19200300.100.1.55': ('audio', ),
'0.9.2342.19200300.100.1.56': ('documentPublisher', ),
'0.9.2342.19200300.100.3': ('pilotAttributeSyntax', ),
'0.9.2342.19200300.100.3.4': ('iA5StringSyntax', ),
'0.9.2342.19200300.100.3.5': ('caseIgnoreIA5StringSyntax', ),
'0.9.2342.19200300.100.4': ('pilotObjectClass', ),
'0.9.2342.19200300.100.4.3': ('pilotObject', ),
'0.9.2342.19200300.100.4.4': ('pilotPerson', ),
'0.9.2342.19200300.100.4.5': ('account', ),
'0.9.2342.19200300.100.4.6': ('document', ),
'0.9.2342.19200300.100.4.7': ('room', ),
'0.9.2342.19200300.100.4.9': ('documentSeries', ),
'0.9.2342.19200300.100.4.13': ('Domain', 'domain'),
'0.9.2342.19200300.100.4.14': ('rFC822localPart', ),
'0.9.2342.19200300.100.4.15': ('dNSDomain', ),
'0.9.2342.19200300.100.4.17': ('domainRelatedObject', ),
'0.9.2342.19200300.100.4.18': ('friendlyCountry', ),
'0.9.2342.19200300.100.4.19': ('simpleSecurityObject', ),
'0.9.2342.19200300.100.4.20': ('pilotOrganization', ),
'0.9.2342.19200300.100.4.21': ('pilotDSA', ),
'0.9.2342.19200300.100.4.22': ('qualityLabelledData', ),
'0.9.2342.19200300.100.10': ('pilotGroups', ),
'1': ('iso', 'ISO'),
'1.0.9797.3.4': ('gmac', 'GMAC'),
'1.0.10118.3.0.55': ('whirlpool', ),
'1.2': ('ISO Member Body', 'member-body'),
'1.2.156': ('ISO CN Member Body', 'ISO-CN'),
'1.2.156.10197': ('oscca', ),
'1.2.156.10197.1': ('sm-scheme', ),
'1.2.156.10197.1.104.1': ('sm4-ecb', 'SM4-ECB'),
'1.2.156.10197.1.104.2': ('sm4-cbc', 'SM4-CBC'),
'1.2.156.10197.1.104.3': ('sm4-ofb', 'SM4-OFB'),
'1.2.156.10197.1.104.4': ('sm4-cfb', 'SM4-CFB'),
'1.2.156.10197.1.104.5': ('sm4-cfb1', 'SM4-CFB1'),
'1.2.156.10197.1.104.6': ('sm4-cfb8', 'SM4-CFB8'),
'1.2.156.10197.1.104.7': ('sm4-ctr', 'SM4-CTR'),
'1.2.156.10197.1.301': ('sm2', 'SM2'),
'1.2.156.10197.1.401': ('sm3', 'SM3'),
'1.2.156.10197.1.501': ('SM2-with-SM3', 'SM2-SM3'),
'1.2.156.10197.1.504': ('sm3WithRSAEncryption', 'RSA-SM3'),
'1.2.392.200011.61.1.1.1.2': ('camellia-128-cbc', 'CAMELLIA-128-CBC'),
'1.2.392.200011.61.1.1.1.3': ('camellia-192-cbc', 'CAMELLIA-192-CBC'),
'1.2.392.200011.61.1.1.1.4': ('camellia-256-cbc', 'CAMELLIA-256-CBC'),
'1.2.392.200011.61.1.1.3.2': ('id-camellia128-wrap', ),
'1.2.392.200011.61.1.1.3.3': ('id-camellia192-wrap', ),
'1.2.392.200011.61.1.1.3.4': ('id-camellia256-wrap', ),
'1.2.410.200004': ('kisa', 'KISA'),
'1.2.410.200004.1.3': ('seed-ecb', 'SEED-ECB'),
'1.2.410.200004.1.4': ('seed-cbc', 'SEED-CBC'),
'1.2.410.200004.1.5': ('seed-cfb', 'SEED-CFB'),
'1.2.410.200004.1.6': ('seed-ofb', 'SEED-OFB'),
'1.2.410.200046.1.1': ('aria', ),
'1.2.410.200046.1.1.1': ('aria-128-ecb', 'ARIA-128-ECB'),
'1.2.410.200046.1.1.2': ('aria-128-cbc', 'ARIA-128-CBC'),
'1.2.410.200046.1.1.3': ('aria-128-cfb', 'ARIA-128-CFB'),
'1.2.410.200046.1.1.4': ('aria-128-ofb', 'ARIA-128-OFB'),
'1.2.410.200046.1.1.5': ('aria-128-ctr', 'ARIA-128-CTR'),
'1.2.410.200046.1.1.6': ('aria-192-ecb', 'ARIA-192-ECB'),
'1.2.410.200046.1.1.7': ('aria-192-cbc', 'ARIA-192-CBC'),
'1.2.410.200046.1.1.8': ('aria-192-cfb', 'ARIA-192-CFB'),
'1.2.410.200046.1.1.9': ('aria-192-ofb', 'ARIA-192-OFB'),
'1.2.410.200046.1.1.10': ('aria-192-ctr', 'ARIA-192-CTR'),
'1.2.410.200046.1.1.11': ('aria-256-ecb', 'ARIA-256-ECB'),
'1.2.410.200046.1.1.12': ('aria-256-cbc', 'ARIA-256-CBC'),
'1.2.410.200046.1.1.13': ('aria-256-cfb', 'ARIA-256-CFB'),
'1.2.410.200046.1.1.14': ('aria-256-ofb', 'ARIA-256-OFB'),
'1.2.410.200046.1.1.15': ('aria-256-ctr', 'ARIA-256-CTR'),
'1.2.410.200046.1.1.34': ('aria-128-gcm', 'ARIA-128-GCM'),
'1.2.410.200046.1.1.35': ('aria-192-gcm', 'ARIA-192-GCM'),
'1.2.410.200046.1.1.36': ('aria-256-gcm', 'ARIA-256-GCM'),
'1.2.410.200046.1.1.37': ('aria-128-ccm', 'ARIA-128-CCM'),
'1.2.410.200046.1.1.38': ('aria-192-ccm', 'ARIA-192-CCM'),
'1.2.410.200046.1.1.39': ('aria-256-ccm', 'ARIA-256-CCM'),
'1.2.643.2.2': ('cryptopro', ),
'1.2.643.2.2.3': ('GOST R 34.11-94 with GOST R 34.10-2001', 'id-GostR3411-94-with-GostR3410-2001'),
'1.2.643.2.2.4': ('GOST R 34.11-94 with GOST R 34.10-94', 'id-GostR3411-94-with-GostR3410-94'),
'1.2.643.2.2.9': ('GOST R 34.11-94', 'md_gost94'),
'1.2.643.2.2.10': ('HMAC GOST 34.11-94', 'id-HMACGostR3411-94'),
'1.2.643.2.2.14.0': ('id-Gost28147-89-None-KeyMeshing', ),
'1.2.643.2.2.14.1': ('id-Gost28147-89-CryptoPro-KeyMeshing', ),
'1.2.643.2.2.19': ('GOST R 34.10-2001', 'gost2001'),
'1.2.643.2.2.20': ('GOST R 34.10-94', 'gost94'),
'1.2.643.2.2.20.1': ('id-GostR3410-94-a', ),
'1.2.643.2.2.20.2': ('id-GostR3410-94-aBis', ),
'1.2.643.2.2.20.3': ('id-GostR3410-94-b', ),
'1.2.643.2.2.20.4': ('id-GostR3410-94-bBis', ),
'1.2.643.2.2.21': ('GOST 28147-89', 'gost89'),
'1.2.643.2.2.22': ('GOST 28147-89 MAC', 'gost-mac'),
'1.2.643.2.2.23': ('GOST R 34.11-94 PRF', 'prf-gostr3411-94'),
'1.2.643.2.2.30.0': ('id-GostR3411-94-TestParamSet', ),
'1.2.643.2.2.30.1': ('id-GostR3411-94-CryptoProParamSet', ),
'1.2.643.2.2.31.0': ('id-Gost28147-89-TestParamSet', ),
'1.2.643.2.2.31.1': ('id-Gost28147-89-CryptoPro-A-ParamSet', ),
'1.2.643.2.2.31.2': ('id-Gost28147-89-CryptoPro-B-ParamSet', ),
'1.2.643.2.2.31.3': ('id-Gost28147-89-CryptoPro-C-ParamSet', ),
'1.2.643.2.2.31.4': ('id-Gost28147-89-CryptoPro-D-ParamSet', ),
'1.2.643.2.2.31.5': ('id-Gost28147-89-CryptoPro-Oscar-1-1-ParamSet', ),
'1.2.643.2.2.31.6': ('id-Gost28147-89-CryptoPro-Oscar-1-0-ParamSet', ),
'1.2.643.2.2.31.7': ('id-Gost28147-89-CryptoPro-RIC-1-ParamSet', ),
'1.2.643.2.2.32.0': ('id-GostR3410-94-TestParamSet', ),
'1.2.643.2.2.32.2': ('id-GostR3410-94-CryptoPro-A-ParamSet', ),
'1.2.643.2.2.32.3': ('id-GostR3410-94-CryptoPro-B-ParamSet', ),
'1.2.643.2.2.32.4': ('id-GostR3410-94-CryptoPro-C-ParamSet', ),
'1.2.643.2.2.32.5': ('id-GostR3410-94-CryptoPro-D-ParamSet', ),
'1.2.643.2.2.33.1': ('id-GostR3410-94-CryptoPro-XchA-ParamSet', ),
'1.2.643.2.2.33.2': ('id-GostR3410-94-CryptoPro-XchB-ParamSet', ),
'1.2.643.2.2.33.3': ('id-GostR3410-94-CryptoPro-XchC-ParamSet', ),
'1.2.643.2.2.35.0': ('id-GostR3410-2001-TestParamSet', ),
'1.2.643.2.2.35.1': ('id-GostR3410-2001-CryptoPro-A-ParamSet', ),
'1.2.643.2.2.35.2': ('id-GostR3410-2001-CryptoPro-B-ParamSet', ),
'1.2.643.2.2.35.3': ('id-GostR3410-2001-CryptoPro-C-ParamSet', ),
'1.2.643.2.2.36.0': ('id-GostR3410-2001-CryptoPro-XchA-ParamSet', ),
'1.2.643.2.2.36.1': ('id-GostR3410-2001-CryptoPro-XchB-ParamSet', ),
'1.2.643.2.2.98': ('GOST R 34.10-2001 DH', 'id-GostR3410-2001DH'),
'1.2.643.2.2.99': ('GOST R 34.10-94 DH', 'id-GostR3410-94DH'),
'1.2.643.2.9': ('cryptocom', ),
'1.2.643.2.9.1.3.3': ('GOST R 34.11-94 with GOST R 34.10-94 Cryptocom', 'id-GostR3411-94-with-GostR3410-94-cc'),
'1.2.643.2.9.1.3.4': ('GOST R 34.11-94 with GOST R 34.10-2001 Cryptocom', 'id-GostR3411-94-with-GostR3410-2001-cc'),
'1.2.643.2.9.1.5.3': ('GOST 34.10-94 Cryptocom', 'gost94cc'),
'1.2.643.2.9.1.5.4': ('GOST 34.10-2001 Cryptocom', 'gost2001cc'),
'1.2.643.2.9.1.6.1': ('GOST 28147-89 Cryptocom ParamSet', 'id-Gost28147-89-cc'),
'1.2.643.2.9.1.8.1': ('GOST R 3410-2001 Parameter Set Cryptocom', 'id-GostR3410-2001-ParamSet-cc'),
'1.2.643.3.131.1.1': ('INN', 'INN'),
'1.2.643.7.1': ('id-tc26', ),
'1.2.643.7.1.1': ('id-tc26-algorithms', ),
'1.2.643.7.1.1.1': ('id-tc26-sign', ),
'1.2.643.7.1.1.1.1': ('GOST R 34.10-2012 with 256 bit modulus', 'gost2012_256'),
'1.2.643.7.1.1.1.2': ('GOST R 34.10-2012 with 512 bit modulus', 'gost2012_512'),
'1.2.643.7.1.1.2': ('id-tc26-digest', ),
'1.2.643.7.1.1.2.2': ('GOST R 34.11-2012 with 256 bit hash', 'md_gost12_256'),
'1.2.643.7.1.1.2.3': ('GOST R 34.11-2012 with 512 bit hash', 'md_gost12_512'),
'1.2.643.7.1.1.3': ('id-tc26-signwithdigest', ),
'1.2.643.7.1.1.3.2': ('GOST R 34.10-2012 with GOST R 34.11-2012 (256 bit)', 'id-tc26-signwithdigest-gost3410-2012-256'),
'1.2.643.7.1.1.3.3': ('GOST R 34.10-2012 with GOST R 34.11-2012 (512 bit)', 'id-tc26-signwithdigest-gost3410-2012-512'),
'1.2.643.7.1.1.4': ('id-tc26-mac', ),
'1.2.643.7.1.1.4.1': ('HMAC GOST 34.11-2012 256 bit', 'id-tc26-hmac-gost-3411-2012-256'),
'1.2.643.7.1.1.4.2': ('HMAC GOST 34.11-2012 512 bit', 'id-tc26-hmac-gost-3411-2012-512'),
'1.2.643.7.1.1.5': ('id-tc26-cipher', ),
'1.2.643.7.1.1.5.1': ('id-tc26-cipher-gostr3412-2015-magma', ),
'1.2.643.7.1.1.5.1.1': ('id-tc26-cipher-gostr3412-2015-magma-ctracpkm', ),
'1.2.643.7.1.1.5.1.2': ('id-tc26-cipher-gostr3412-2015-magma-ctracpkm-omac', ),
'1.2.643.7.1.1.5.2': ('id-tc26-cipher-gostr3412-2015-kuznyechik', ),
'1.2.643.7.1.1.5.2.1': ('id-tc26-cipher-gostr3412-2015-kuznyechik-ctracpkm', ),
'1.2.643.7.1.1.5.2.2': ('id-tc26-cipher-gostr3412-2015-kuznyechik-ctracpkm-omac', ),
'1.2.643.7.1.1.6': ('id-tc26-agreement', ),
'1.2.643.7.1.1.6.1': ('id-tc26-agreement-gost-3410-2012-256', ),
'1.2.643.7.1.1.6.2': ('id-tc26-agreement-gost-3410-2012-512', ),
'1.2.643.7.1.1.7': ('id-tc26-wrap', ),
'1.2.643.7.1.1.7.1': ('id-tc26-wrap-gostr3412-2015-magma', ),
'1.2.643.7.1.1.7.1.1': ('id-tc26-wrap-gostr3412-2015-magma-kexp15', 'id-tc26-wrap-gostr3412-2015-kuznyechik-kexp15'),
'1.2.643.7.1.1.7.2': ('id-tc26-wrap-gostr3412-2015-kuznyechik', ),
'1.2.643.7.1.2': ('id-tc26-constants', ),
'1.2.643.7.1.2.1': ('id-tc26-sign-constants', ),
'1.2.643.7.1.2.1.1': ('id-tc26-gost-3410-2012-256-constants', ),
'1.2.643.7.1.2.1.1.1': ('GOST R 34.10-2012 (256 bit) ParamSet A', 'id-tc26-gost-3410-2012-256-paramSetA'),
'1.2.643.7.1.2.1.1.2': ('GOST R 34.10-2012 (256 bit) ParamSet B', 'id-tc26-gost-3410-2012-256-paramSetB'),
'1.2.643.7.1.2.1.1.3': ('GOST R 34.10-2012 (256 bit) ParamSet C', 'id-tc26-gost-3410-2012-256-paramSetC'),
'1.2.643.7.1.2.1.1.4': ('GOST R 34.10-2012 (256 bit) ParamSet D', 'id-tc26-gost-3410-2012-256-paramSetD'),
'1.2.643.7.1.2.1.2': ('id-tc26-gost-3410-2012-512-constants', ),
'1.2.643.7.1.2.1.2.0': ('GOST R 34.10-2012 (512 bit) testing parameter set', 'id-tc26-gost-3410-2012-512-paramSetTest'),
'1.2.643.7.1.2.1.2.1': ('GOST R 34.10-2012 (512 bit) ParamSet A', 'id-tc26-gost-3410-2012-512-paramSetA'),
'1.2.643.7.1.2.1.2.2': ('GOST R 34.10-2012 (512 bit) ParamSet B', 'id-tc26-gost-3410-2012-512-paramSetB'),
'1.2.643.7.1.2.1.2.3': ('GOST R 34.10-2012 (512 bit) ParamSet C', 'id-tc26-gost-3410-2012-512-paramSetC'),
'1.2.643.7.1.2.2': ('id-tc26-digest-constants', ),
'1.2.643.7.1.2.5': ('id-tc26-cipher-constants', ),
'1.2.643.7.1.2.5.1': ('id-tc26-gost-28147-constants', ),
'1.2.643.7.1.2.5.1.1': ('GOST 28147-89 TC26 parameter set', 'id-tc26-gost-28147-param-Z'),
'1.2.643.100.1': ('OGRN', 'OGRN'),
'1.2.643.100.3': ('SNILS', 'SNILS'),
'1.2.643.100.111': ('Signing Tool of Subject', 'subjectSignTool'),
'1.2.643.100.112': ('Signing Tool of Issuer', 'issuerSignTool'),
'1.2.804': ('ISO-UA', ),
'1.2.804.2.1.1.1': ('ua-pki', ),
'1.2.804.2.1.1.1.1.1.1': ('DSTU Gost 28147-2009', 'dstu28147'),
'1.2.804.2.1.1.1.1.1.1.2': ('DSTU Gost 28147-2009 OFB mode', 'dstu28147-ofb'),
'1.2.804.2.1.1.1.1.1.1.3': ('DSTU Gost 28147-2009 CFB mode', 'dstu28147-cfb'),
'1.2.804.2.1.1.1.1.1.1.5': ('DSTU Gost 28147-2009 key wrap', 'dstu28147-wrap'),
'1.2.804.2.1.1.1.1.1.2': ('HMAC DSTU Gost 34311-95', 'hmacWithDstu34311'),
'1.2.804.2.1.1.1.1.2.1': ('DSTU Gost 34311-95', 'dstu34311'),
'1.2.804.2.1.1.1.1.3.1.1': ('DSTU 4145-2002 little endian', 'dstu4145le'),
'1.2.804.2.1.1.1.1.3.1.1.1.1': ('DSTU 4145-2002 big endian', 'dstu4145be'),
'1.2.804.2.1.1.1.1.3.1.1.2.0': ('DSTU curve 0', 'uacurve0'),
'1.2.804.2.1.1.1.1.3.1.1.2.1': ('DSTU curve 1', 'uacurve1'),
'1.2.804.2.1.1.1.1.3.1.1.2.2': ('DSTU curve 2', 'uacurve2'),
'1.2.804.2.1.1.1.1.3.1.1.2.3': ('DSTU curve 3', 'uacurve3'),
'1.2.804.2.1.1.1.1.3.1.1.2.4': ('DSTU curve 4', 'uacurve4'),
'1.2.804.2.1.1.1.1.3.1.1.2.5': ('DSTU curve 5', 'uacurve5'),
'1.2.804.2.1.1.1.1.3.1.1.2.6': ('DSTU curve 6', 'uacurve6'),
'1.2.804.2.1.1.1.1.3.1.1.2.7': ('DSTU curve 7', 'uacurve7'),
'1.2.804.2.1.1.1.1.3.1.1.2.8': ('DSTU curve 8', 'uacurve8'),
'1.2.804.2.1.1.1.1.3.1.1.2.9': ('DSTU curve 9', 'uacurve9'),
'1.2.840': ('ISO US Member Body', 'ISO-US'),
'1.2.840.10040': ('X9.57', 'X9-57'),
'1.2.840.10040.2': ('holdInstruction', ),
'1.2.840.10040.2.1': ('Hold Instruction None', 'holdInstructionNone'),
'1.2.840.10040.2.2': ('Hold Instruction Call Issuer', 'holdInstructionCallIssuer'),
'1.2.840.10040.2.3': ('Hold Instruction Reject', 'holdInstructionReject'),
'1.2.840.10040.4': ('X9.57 CM ?', 'X9cm'),
'1.2.840.10040.4.1': ('dsaEncryption', 'DSA'),
'1.2.840.10040.4.3': ('dsaWithSHA1', 'DSA-SHA1'),
'1.2.840.10045': ('ANSI X9.62', 'ansi-X9-62'),
'1.2.840.10045.1': ('id-fieldType', ),
'1.2.840.10045.1.1': ('prime-field', ),
'1.2.840.10045.1.2': ('characteristic-two-field', ),
'1.2.840.10045.1.2.3': ('id-characteristic-two-basis', ),
'1.2.840.10045.1.2.3.1': ('onBasis', ),
'1.2.840.10045.1.2.3.2': ('tpBasis', ),
'1.2.840.10045.1.2.3.3': ('ppBasis', ),
'1.2.840.10045.2': ('id-publicKeyType', ),
'1.2.840.10045.2.1': ('id-ecPublicKey', ),
'1.2.840.10045.3': ('ellipticCurve', ),
'1.2.840.10045.3.0': ('c-TwoCurve', ),
'1.2.840.10045.3.0.1': ('c2pnb163v1', ),
'1.2.840.10045.3.0.2': ('c2pnb163v2', ),
'1.2.840.10045.3.0.3': ('c2pnb163v3', ),
'1.2.840.10045.3.0.4': ('c2pnb176v1', ),
'1.2.840.10045.3.0.5': ('c2tnb191v1', ),
'1.2.840.10045.3.0.6': ('c2tnb191v2', ),
'1.2.840.10045.3.0.7': ('c2tnb191v3', ),
'1.2.840.10045.3.0.8': ('c2onb191v4', ),
'1.2.840.10045.3.0.9': ('c2onb191v5', ),
'1.2.840.10045.3.0.10': ('c2pnb208w1', ),
'1.2.840.10045.3.0.11': ('c2tnb239v1', ),
'1.2.840.10045.3.0.12': ('c2tnb239v2', ),
'1.2.840.10045.3.0.13': ('c2tnb239v3', ),
'1.2.840.10045.3.0.14': ('c2onb239v4', ),
'1.2.840.10045.3.0.15': ('c2onb239v5', ),
'1.2.840.10045.3.0.16': ('c2pnb272w1', ),
'1.2.840.10045.3.0.17': ('c2pnb304w1', ),
'1.2.840.10045.3.0.18': ('c2tnb359v1', ),
'1.2.840.10045.3.0.19': ('c2pnb368w1', ),
'1.2.840.10045.3.0.20': ('c2tnb431r1', ),
'1.2.840.10045.3.1': ('primeCurve', ),
'1.2.840.10045.3.1.1': ('prime192v1', ),
'1.2.840.10045.3.1.2': ('prime192v2', ),
'1.2.840.10045.3.1.3': ('prime192v3', ),
'1.2.840.10045.3.1.4': ('prime239v1', ),
'1.2.840.10045.3.1.5': ('prime239v2', ),
'1.2.840.10045.3.1.6': ('prime239v3', ),
'1.2.840.10045.3.1.7': ('prime256v1', ),
'1.2.840.10045.4': ('id-ecSigType', ),
'1.2.840.10045.4.1': ('ecdsa-with-SHA1', ),
'1.2.840.10045.4.2': ('ecdsa-with-Recommended', ),
'1.2.840.10045.4.3': ('ecdsa-with-Specified', ),
'1.2.840.10045.4.3.1': ('ecdsa-with-SHA224', ),
'1.2.840.10045.4.3.2': ('ecdsa-with-SHA256', ),
'1.2.840.10045.4.3.3': ('ecdsa-with-SHA384', ),
'1.2.840.10045.4.3.4': ('ecdsa-with-SHA512', ),
'1.2.840.10046.2.1': ('X9.42 DH', 'dhpublicnumber'),
'1.2.840.113533.7.66.10': ('cast5-cbc', 'CAST5-CBC'),
'1.2.840.113533.7.66.12': ('pbeWithMD5AndCast5CBC', ),
'1.2.840.113533.7.66.13': ('password based MAC', 'id-PasswordBasedMAC'),
'1.2.840.113533.7.66.30': ('Diffie-Hellman based MAC', 'id-DHBasedMac'),
'1.2.840.113549': ('RSA Data Security, Inc.', 'rsadsi'),
'1.2.840.113549.1': ('RSA Data Security, Inc. PKCS', 'pkcs'),
'1.2.840.113549.1.1': ('pkcs1', ),
'1.2.840.113549.1.1.1': ('rsaEncryption', ),
'1.2.840.113549.1.1.2': ('md2WithRSAEncryption', 'RSA-MD2'),
'1.2.840.113549.1.1.3': ('md4WithRSAEncryption', 'RSA-MD4'),
'1.2.840.113549.1.1.4': ('md5WithRSAEncryption', 'RSA-MD5'),
'1.2.840.113549.1.1.5': ('sha1WithRSAEncryption', 'RSA-SHA1'),
'1.2.840.113549.1.1.6': ('rsaOAEPEncryptionSET', ),
'1.2.840.113549.1.1.7': ('rsaesOaep', 'RSAES-OAEP'),
'1.2.840.113549.1.1.8': ('mgf1', 'MGF1'),
'1.2.840.113549.1.1.9': ('pSpecified', 'PSPECIFIED'),
'1.2.840.113549.1.1.10': ('rsassaPss', 'RSASSA-PSS'),
'1.2.840.113549.1.1.11': ('sha256WithRSAEncryption', 'RSA-SHA256'),
'1.2.840.113549.1.1.12': ('sha384WithRSAEncryption', 'RSA-SHA384'),
'1.2.840.113549.1.1.13': ('sha512WithRSAEncryption', 'RSA-SHA512'),
'1.2.840.113549.1.1.14': ('sha224WithRSAEncryption', 'RSA-SHA224'),
'1.2.840.113549.1.1.15': ('sha512-224WithRSAEncryption', 'RSA-SHA512/224'),
'1.2.840.113549.1.1.16': ('sha512-256WithRSAEncryption', 'RSA-SHA512/256'),
'1.2.840.113549.1.3': ('pkcs3', ),
'1.2.840.113549.1.3.1': ('dhKeyAgreement', ),
'1.2.840.113549.1.5': ('pkcs5', ),
'1.2.840.113549.1.5.1': ('pbeWithMD2AndDES-CBC', 'PBE-MD2-DES'),
'1.2.840.113549.1.5.3': ('pbeWithMD5AndDES-CBC', 'PBE-MD5-DES'),
'1.2.840.113549.1.5.4': ('pbeWithMD2AndRC2-CBC', 'PBE-MD2-RC2-64'),
'1.2.840.113549.1.5.6': ('pbeWithMD5AndRC2-CBC', 'PBE-MD5-RC2-64'),
'1.2.840.113549.1.5.10': ('pbeWithSHA1AndDES-CBC', 'PBE-SHA1-DES'),
'1.2.840.113549.1.5.11': ('pbeWithSHA1AndRC2-CBC', 'PBE-SHA1-RC2-64'),
'1.2.840.113549.1.5.12': ('PBKDF2', ),
'1.2.840.113549.1.5.13': ('PBES2', ),
'1.2.840.113549.1.5.14': ('PBMAC1', ),
'1.2.840.113549.1.7': ('pkcs7', ),
'1.2.840.113549.1.7.1': ('pkcs7-data', ),
'1.2.840.113549.1.7.2': ('pkcs7-signedData', ),
'1.2.840.113549.1.7.3': ('pkcs7-envelopedData', ),
'1.2.840.113549.1.7.4': ('pkcs7-signedAndEnvelopedData', ),
'1.2.840.113549.1.7.5': ('pkcs7-digestData', ),
'1.2.840.113549.1.7.6': ('pkcs7-encryptedData', ),
'1.2.840.113549.1.9': ('pkcs9', ),
'1.2.840.113549.1.9.1': ('emailAddress', ),
'1.2.840.113549.1.9.2': ('unstructuredName', ),
'1.2.840.113549.1.9.3': ('contentType', ),
'1.2.840.113549.1.9.4': ('messageDigest', ),
'1.2.840.113549.1.9.5': ('signingTime', ),
'1.2.840.113549.1.9.6': ('countersignature', ),
'1.2.840.113549.1.9.7': ('challengePassword', ),
'1.2.840.113549.1.9.8': ('unstructuredAddress', ),
'1.2.840.113549.1.9.9': ('extendedCertificateAttributes', ),
'1.2.840.113549.1.9.14': ('Extension Request', 'extReq'),
'1.2.840.113549.1.9.15': ('S/MIME Capabilities', 'SMIME-CAPS'),
'1.2.840.113549.1.9.16': ('S/MIME', 'SMIME'),
'1.2.840.113549.1.9.16.0': ('id-smime-mod', ),
'1.2.840.113549.1.9.16.0.1': ('id-smime-mod-cms', ),
'1.2.840.113549.1.9.16.0.2': ('id-smime-mod-ess', ),
'1.2.840.113549.1.9.16.0.3': ('id-smime-mod-oid', ),
'1.2.840.113549.1.9.16.0.4': ('id-smime-mod-msg-v3', ),
'1.2.840.113549.1.9.16.0.5': ('id-smime-mod-ets-eSignature-88', ),
'1.2.840.113549.1.9.16.0.6': ('id-smime-mod-ets-eSignature-97', ),
'1.2.840.113549.1.9.16.0.7': ('id-smime-mod-ets-eSigPolicy-88', ),
'1.2.840.113549.1.9.16.0.8': ('id-smime-mod-ets-eSigPolicy-97', ),
'1.2.840.113549.1.9.16.1': ('id-smime-ct', ),
'1.2.840.113549.1.9.16.1.1': ('id-smime-ct-receipt', ),
'1.2.840.113549.1.9.16.1.2': ('id-smime-ct-authData', ),
'1.2.840.113549.1.9.16.1.3': ('id-smime-ct-publishCert', ),
'1.2.840.113549.1.9.16.1.4': ('id-smime-ct-TSTInfo', ),
'1.2.840.113549.1.9.16.1.5': ('id-smime-ct-TDTInfo', ),
'1.2.840.113549.1.9.16.1.6': ('id-smime-ct-contentInfo', ),
'1.2.840.113549.1.9.16.1.7': ('id-smime-ct-DVCSRequestData', ),
'1.2.840.113549.1.9.16.1.8': ('id-smime-ct-DVCSResponseData', ),
'1.2.840.113549.1.9.16.1.9': ('id-smime-ct-compressedData', ),
'1.2.840.113549.1.9.16.1.19': ('id-smime-ct-contentCollection', ),
'1.2.840.113549.1.9.16.1.23': ('id-smime-ct-authEnvelopedData', ),
'1.2.840.113549.1.9.16.1.27': ('id-ct-asciiTextWithCRLF', ),
'1.2.840.113549.1.9.16.1.28': ('id-ct-xml', ),
'1.2.840.113549.1.9.16.2': ('id-smime-aa', ),
'1.2.840.113549.1.9.16.2.1': ('id-smime-aa-receiptRequest', ),
'1.2.840.113549.1.9.16.2.2': ('id-smime-aa-securityLabel', ),
'1.2.840.113549.1.9.16.2.3': ('id-smime-aa-mlExpandHistory', ),
'1.2.840.113549.1.9.16.2.4': ('id-smime-aa-contentHint', ),
'1.2.840.113549.1.9.16.2.5': ('id-smime-aa-msgSigDigest', ),
'1.2.840.113549.1.9.16.2.6': ('id-smime-aa-encapContentType', ),
'1.2.840.113549.1.9.16.2.7': ('id-smime-aa-contentIdentifier', ),
'1.2.840.113549.1.9.16.2.8': ('id-smime-aa-macValue', ),
'1.2.840.113549.1.9.16.2.9': ('id-smime-aa-equivalentLabels', ),
'1.2.840.113549.1.9.16.2.10': ('id-smime-aa-contentReference', ),
'1.2.840.113549.1.9.16.2.11': ('id-smime-aa-encrypKeyPref', ),
'1.2.840.113549.1.9.16.2.12': ('id-smime-aa-signingCertificate', ),
'1.2.840.113549.1.9.16.2.13': ('id-smime-aa-smimeEncryptCerts', ),
'1.2.840.113549.1.9.16.2.14': ('id-smime-aa-timeStampToken', ),
'1.2.840.113549.1.9.16.2.15': ('id-smime-aa-ets-sigPolicyId', ),
'1.2.840.113549.1.9.16.2.16': ('id-smime-aa-ets-commitmentType', ),
'1.2.840.113549.1.9.16.2.17': ('id-smime-aa-ets-signerLocation', ),
'1.2.840.113549.1.9.16.2.18': ('id-smime-aa-ets-signerAttr', ),
'1.2.840.113549.1.9.16.2.19': ('id-smime-aa-ets-otherSigCert', ),
'1.2.840.113549.1.9.16.2.20': ('id-smime-aa-ets-contentTimestamp', ),
'1.2.840.113549.1.9.16.2.21': ('id-smime-aa-ets-CertificateRefs', ),
'1.2.840.113549.1.9.16.2.22': ('id-smime-aa-ets-RevocationRefs', ),
'1.2.840.113549.1.9.16.2.23': ('id-smime-aa-ets-certValues', ),
'1.2.840.113549.1.9.16.2.24': ('id-smime-aa-ets-revocationValues', ),
'1.2.840.113549.1.9.16.2.25': ('id-smime-aa-ets-escTimeStamp', ),
'1.2.840.113549.1.9.16.2.26': ('id-smime-aa-ets-certCRLTimestamp', ),
'1.2.840.113549.1.9.16.2.27': ('id-smime-aa-ets-archiveTimeStamp', ),
'1.2.840.113549.1.9.16.2.28': ('id-smime-aa-signatureType', ),
'1.2.840.113549.1.9.16.2.29': ('id-smime-aa-dvcs-dvc', ),
'1.2.840.113549.1.9.16.2.47': ('id-smime-aa-signingCertificateV2', ),
'1.2.840.113549.1.9.16.3': ('id-smime-alg', ),
'1.2.840.113549.1.9.16.3.1': ('id-smime-alg-ESDHwith3DES', ),
'1.2.840.113549.1.9.16.3.2': ('id-smime-alg-ESDHwithRC2', ),
'1.2.840.113549.1.9.16.3.3': ('id-smime-alg-3DESwrap', ),
'1.2.840.113549.1.9.16.3.4': ('id-smime-alg-RC2wrap', ),
'1.2.840.113549.1.9.16.3.5': ('id-smime-alg-ESDH', ),
'1.2.840.113549.1.9.16.3.6': ('id-smime-alg-CMS3DESwrap', ),
'1.2.840.113549.1.9.16.3.7': ('id-smime-alg-CMSRC2wrap', ),
'1.2.840.113549.1.9.16.3.8': ('zlib compression', 'ZLIB'),
'1.2.840.113549.1.9.16.3.9': ('id-alg-PWRI-KEK', ),
'1.2.840.113549.1.9.16.4': ('id-smime-cd', ),
'1.2.840.113549.1.9.16.4.1': ('id-smime-cd-ldap', ),
'1.2.840.113549.1.9.16.5': ('id-smime-spq', ),
'1.2.840.113549.1.9.16.5.1': ('id-smime-spq-ets-sqt-uri', ),
'1.2.840.113549.1.9.16.5.2': ('id-smime-spq-ets-sqt-unotice', ),
'1.2.840.113549.1.9.16.6': ('id-smime-cti', ),
'1.2.840.113549.1.9.16.6.1': ('id-smime-cti-ets-proofOfOrigin', ),
'1.2.840.113549.1.9.16.6.2': ('id-smime-cti-ets-proofOfReceipt', ),
'1.2.840.113549.1.9.16.6.3': ('id-smime-cti-ets-proofOfDelivery', ),
'1.2.840.113549.1.9.16.6.4': ('id-smime-cti-ets-proofOfSender', ),
'1.2.840.113549.1.9.16.6.5': ('id-smime-cti-ets-proofOfApproval', ),
'1.2.840.113549.1.9.16.6.6': ('id-smime-cti-ets-proofOfCreation', ),
'1.2.840.113549.1.9.20': ('friendlyName', ),
'1.2.840.113549.1.9.21': ('localKeyID', ),
'1.2.840.113549.1.9.22': ('certTypes', ),
'1.2.840.113549.1.9.22.1': ('x509Certificate', ),
'1.2.840.113549.1.9.22.2': ('sdsiCertificate', ),
'1.2.840.113549.1.9.23': ('crlTypes', ),
'1.2.840.113549.1.9.23.1': ('x509Crl', ),
'1.2.840.113549.1.12': ('pkcs12', ),
'1.2.840.113549.1.12.1': ('pkcs12-pbeids', ),
'1.2.840.113549.1.12.1.1': ('pbeWithSHA1And128BitRC4', 'PBE-SHA1-RC4-128'),
'1.2.840.113549.1.12.1.2': ('pbeWithSHA1And40BitRC4', 'PBE-SHA1-RC4-40'),
'1.2.840.113549.1.12.1.3': ('pbeWithSHA1And3-KeyTripleDES-CBC', 'PBE-SHA1-3DES'),
'1.2.840.113549.1.12.1.4': ('pbeWithSHA1And2-KeyTripleDES-CBC', 'PBE-SHA1-2DES'),
'1.2.840.113549.1.12.1.5': ('pbeWithSHA1And128BitRC2-CBC', 'PBE-SHA1-RC2-128'),
'1.2.840.113549.1.12.1.6': ('pbeWithSHA1And40BitRC2-CBC', 'PBE-SHA1-RC2-40'),
'1.2.840.113549.1.12.10': ('pkcs12-Version1', ),
'1.2.840.113549.1.12.10.1': ('pkcs12-BagIds', ),
'1.2.840.113549.1.12.10.1.1': ('keyBag', ),
'1.2.840.113549.1.12.10.1.2': ('pkcs8ShroudedKeyBag', ),
'1.2.840.113549.1.12.10.1.3': ('certBag', ),
'1.2.840.113549.1.12.10.1.4': ('crlBag', ),
'1.2.840.113549.1.12.10.1.5': ('secretBag', ),
'1.2.840.113549.1.12.10.1.6': ('safeContentsBag', ),
'1.2.840.113549.2.2': ('md2', 'MD2'),
'1.2.840.113549.2.4': ('md4', 'MD4'),
'1.2.840.113549.2.5': ('md5', 'MD5'),
'1.2.840.113549.2.6': ('hmacWithMD5', ),
'1.2.840.113549.2.7': ('hmacWithSHA1', ),
'1.2.840.113549.2.8': ('hmacWithSHA224', ),
'1.2.840.113549.2.9': ('hmacWithSHA256', ),
'1.2.840.113549.2.10': ('hmacWithSHA384', ),
'1.2.840.113549.2.11': ('hmacWithSHA512', ),
'1.2.840.113549.2.12': ('hmacWithSHA512-224', ),
'1.2.840.113549.2.13': ('hmacWithSHA512-256', ),
'1.2.840.113549.3.2': ('rc2-cbc', 'RC2-CBC'),
'1.2.840.113549.3.4': ('rc4', 'RC4'),
'1.2.840.113549.3.7': ('des-ede3-cbc', 'DES-EDE3-CBC'),
'1.2.840.113549.3.8': ('rc5-cbc', 'RC5-CBC'),
'1.2.840.113549.3.10': ('des-cdmf', 'DES-CDMF'),
'1.3': ('identified-organization', 'org', 'ORG'),
'1.3.6': ('dod', 'DOD'),
'1.3.6.1': ('iana', 'IANA', 'internet'),
'1.3.6.1.1': ('Directory', 'directory'),
'1.3.6.1.2': ('Management', 'mgmt'),
'1.3.6.1.3': ('Experimental', 'experimental'),
'1.3.6.1.4': ('Private', 'private'),
'1.3.6.1.4.1': ('Enterprises', 'enterprises'),
'1.3.6.1.4.1.188.7.1.1.2': ('idea-cbc', 'IDEA-CBC'),
'1.3.6.1.4.1.311.2.1.14': ('Microsoft Extension Request', 'msExtReq'),
'1.3.6.1.4.1.311.2.1.21': ('Microsoft Individual Code Signing', 'msCodeInd'),
'1.3.6.1.4.1.311.2.1.22': ('Microsoft Commercial Code Signing', 'msCodeCom'),
'1.3.6.1.4.1.311.10.3.1': ('Microsoft Trust List Signing', 'msCTLSign'),
'1.3.6.1.4.1.311.10.3.3': ('Microsoft Server Gated Crypto', 'msSGC'),
'1.3.6.1.4.1.311.10.3.4': ('Microsoft Encrypted File System', 'msEFS'),
'1.3.6.1.4.1.311.17.1': ('Microsoft CSP Name', 'CSPName'),
'1.3.6.1.4.1.311.17.2': ('Microsoft Local Key set', 'LocalKeySet'),
'1.3.6.1.4.1.311.20.2.2': ('Microsoft Smartcardlogin', 'msSmartcardLogin'),
'1.3.6.1.4.1.311.20.2.3': ('Microsoft Universal Principal Name', 'msUPN'),
'1.3.6.1.4.1.311.60.2.1.1': ('jurisdictionLocalityName', 'jurisdictionL'),
'1.3.6.1.4.1.311.60.2.1.2': ('jurisdictionStateOrProvinceName', 'jurisdictionST'),
'1.3.6.1.4.1.311.60.2.1.3': ('jurisdictionCountryName', 'jurisdictionC'),
'1.3.6.1.4.1.1466.344': ('dcObject', 'dcobject'),
'1.3.6.1.4.1.1722.12.2.1.16': ('blake2b512', 'BLAKE2b512'),
'1.3.6.1.4.1.1722.12.2.2.8': ('blake2s256', 'BLAKE2s256'),
'1.3.6.1.4.1.3029.1.2': ('bf-cbc', 'BF-CBC'),
'1.3.6.1.4.1.11129.2.4.2': ('CT Precertificate SCTs', 'ct_precert_scts'),
'1.3.6.1.4.1.11129.2.4.3': ('CT Precertificate Poison', 'ct_precert_poison'),
'1.3.6.1.4.1.11129.2.4.4': ('CT Precertificate Signer', 'ct_precert_signer'),
'1.3.6.1.4.1.11129.2.4.5': ('CT Certificate SCTs', 'ct_cert_scts'),
'1.3.6.1.4.1.11591.4.11': ('scrypt', 'id-scrypt'),
'1.3.6.1.5': ('Security', 'security'),
'1.3.6.1.5.2.3': ('id-pkinit', ),
'1.3.6.1.5.2.3.4': ('PKINIT Client Auth', 'pkInitClientAuth'),
'1.3.6.1.5.2.3.5': ('Signing KDC Response', 'pkInitKDC'),
'1.3.6.1.5.5.7': ('PKIX', ),
'1.3.6.1.5.5.7.0': ('id-pkix-mod', ),
'1.3.6.1.5.5.7.0.1': ('id-pkix1-explicit-88', ),
'1.3.6.1.5.5.7.0.2': ('id-pkix1-implicit-88', ),
'1.3.6.1.5.5.7.0.3': ('id-pkix1-explicit-93', ),
'1.3.6.1.5.5.7.0.4': ('id-pkix1-implicit-93', ),
'1.3.6.1.5.5.7.0.5': ('id-mod-crmf', ),
'1.3.6.1.5.5.7.0.6': ('id-mod-cmc', ),
'1.3.6.1.5.5.7.0.7': ('id-mod-kea-profile-88', ),
'1.3.6.1.5.5.7.0.8': ('id-mod-kea-profile-93', ),
'1.3.6.1.5.5.7.0.9': ('id-mod-cmp', ),
'1.3.6.1.5.5.7.0.10': ('id-mod-qualified-cert-88', ),
'1.3.6.1.5.5.7.0.11': ('id-mod-qualified-cert-93', ),
'1.3.6.1.5.5.7.0.12': ('id-mod-attribute-cert', ),
'1.3.6.1.5.5.7.0.13': ('id-mod-timestamp-protocol', ),
'1.3.6.1.5.5.7.0.14': ('id-mod-ocsp', ),
'1.3.6.1.5.5.7.0.15': ('id-mod-dvcs', ),
'1.3.6.1.5.5.7.0.16': ('id-mod-cmp2000', ),
'1.3.6.1.5.5.7.1': ('id-pe', ),
'1.3.6.1.5.5.7.1.1': ('Authority Information Access', 'authorityInfoAccess'),
'1.3.6.1.5.5.7.1.2': ('Biometric Info', 'biometricInfo'),
'1.3.6.1.5.5.7.1.3': ('qcStatements', ),
'1.3.6.1.5.5.7.1.4': ('ac-auditEntity', ),
'1.3.6.1.5.5.7.1.5': ('ac-targeting', ),
'1.3.6.1.5.5.7.1.6': ('aaControls', ),
'1.3.6.1.5.5.7.1.7': ('sbgp-ipAddrBlock', ),
'1.3.6.1.5.5.7.1.8': ('sbgp-autonomousSysNum', ),
'1.3.6.1.5.5.7.1.9': ('sbgp-routerIdentifier', ),
'1.3.6.1.5.5.7.1.10': ('ac-proxying', ),
'1.3.6.1.5.5.7.1.11': ('Subject Information Access', 'subjectInfoAccess'),
'1.3.6.1.5.5.7.1.14': ('Proxy Certificate Information', 'proxyCertInfo'),
'1.3.6.1.5.5.7.1.24': ('TLS Feature', 'tlsfeature'),
'1.3.6.1.5.5.7.2': ('id-qt', ),
'1.3.6.1.5.5.7.2.1': ('Policy Qualifier CPS', 'id-qt-cps'),
'1.3.6.1.5.5.7.2.2': ('Policy Qualifier User Notice', 'id-qt-unotice'),
'1.3.6.1.5.5.7.2.3': ('textNotice', ),
'1.3.6.1.5.5.7.3': ('id-kp', ),
'1.3.6.1.5.5.7.3.1': ('TLS Web Server Authentication', 'serverAuth'),
'1.3.6.1.5.5.7.3.2': ('TLS Web Client Authentication', 'clientAuth'),
'1.3.6.1.5.5.7.3.3': ('Code Signing', 'codeSigning'),
'1.3.6.1.5.5.7.3.4': ('E-mail Protection', 'emailProtection'),
'1.3.6.1.5.5.7.3.5': ('IPSec End System', 'ipsecEndSystem'),
'1.3.6.1.5.5.7.3.6': ('IPSec Tunnel', 'ipsecTunnel'),
'1.3.6.1.5.5.7.3.7': ('IPSec User', 'ipsecUser'),
'1.3.6.1.5.5.7.3.8': ('Time Stamping', 'timeStamping'),
'1.3.6.1.5.5.7.3.9': ('OCSP Signing', 'OCSPSigning'),
'1.3.6.1.5.5.7.3.10': ('dvcs', 'DVCS'),
'1.3.6.1.5.5.7.3.17': ('ipsec Internet Key Exchange', 'ipsecIKE'),
'1.3.6.1.5.5.7.3.18': ('Ctrl/provision WAP Access', 'capwapAC'),
'1.3.6.1.5.5.7.3.19': ('Ctrl/Provision WAP Termination', 'capwapWTP'),
'1.3.6.1.5.5.7.3.21': ('SSH Client', 'secureShellClient'),
'1.3.6.1.5.5.7.3.22': ('SSH Server', 'secureShellServer'),
'1.3.6.1.5.5.7.3.23': ('Send Router', 'sendRouter'),
'1.3.6.1.5.5.7.3.24': ('Send Proxied Router', 'sendProxiedRouter'),
'1.3.6.1.5.5.7.3.25': ('Send Owner', 'sendOwner'),
'1.3.6.1.5.5.7.3.26': ('Send Proxied Owner', 'sendProxiedOwner'),
'1.3.6.1.5.5.7.3.27': ('CMC Certificate Authority', 'cmcCA'),
'1.3.6.1.5.5.7.3.28': ('CMC Registration Authority', 'cmcRA'),
'1.3.6.1.5.5.7.4': ('id-it', ),
'1.3.6.1.5.5.7.4.1': ('id-it-caProtEncCert', ),
'1.3.6.1.5.5.7.4.2': ('id-it-signKeyPairTypes', ),
'1.3.6.1.5.5.7.4.3': ('id-it-encKeyPairTypes', ),
'1.3.6.1.5.5.7.4.4': ('id-it-preferredSymmAlg', ),
'1.3.6.1.5.5.7.4.5': ('id-it-caKeyUpdateInfo', ),
'1.3.6.1.5.5.7.4.6': ('id-it-currentCRL', ),
'1.3.6.1.5.5.7.4.7': ('id-it-unsupportedOIDs', ),
'1.3.6.1.5.5.7.4.8': ('id-it-subscriptionRequest', ),
'1.3.6.1.5.5.7.4.9': ('id-it-subscriptionResponse', ),
'1.3.6.1.5.5.7.4.10': ('id-it-keyPairParamReq', ),
'1.3.6.1.5.5.7.4.11': ('id-it-keyPairParamRep', ),
'1.3.6.1.5.5.7.4.12': ('id-it-revPassphrase', ),
'1.3.6.1.5.5.7.4.13': ('id-it-implicitConfirm', ),
'1.3.6.1.5.5.7.4.14': ('id-it-confirmWaitTime', ),
'1.3.6.1.5.5.7.4.15': ('id-it-origPKIMessage', ),
'1.3.6.1.5.5.7.4.16': ('id-it-suppLangTags', ),
'1.3.6.1.5.5.7.5': ('id-pkip', ),
'1.3.6.1.5.5.7.5.1': ('id-regCtrl', ),
'1.3.6.1.5.5.7.5.1.1': ('id-regCtrl-regToken', ),
'1.3.6.1.5.5.7.5.1.2': ('id-regCtrl-authenticator', ),
'1.3.6.1.5.5.7.5.1.3': ('id-regCtrl-pkiPublicationInfo', ),
'1.3.6.1.5.5.7.5.1.4': ('id-regCtrl-pkiArchiveOptions', ),
'1.3.6.1.5.5.7.5.1.5': ('id-regCtrl-oldCertID', ),
'1.3.6.1.5.5.7.5.1.6': ('id-regCtrl-protocolEncrKey', ),
'1.3.6.1.5.5.7.5.2': ('id-regInfo', ),
'1.3.6.1.5.5.7.5.2.1': ('id-regInfo-utf8Pairs', ),
'1.3.6.1.5.5.7.5.2.2': ('id-regInfo-certReq', ),
'1.3.6.1.5.5.7.6': ('id-alg', ),
'1.3.6.1.5.5.7.6.1': ('id-alg-des40', ),
'1.3.6.1.5.5.7.6.2': ('id-alg-noSignature', ),
'1.3.6.1.5.5.7.6.3': ('id-alg-dh-sig-hmac-sha1', ),
'1.3.6.1.5.5.7.6.4': ('id-alg-dh-pop', ),
'1.3.6.1.5.5.7.7': ('id-cmc', ),
'1.3.6.1.5.5.7.7.1': ('id-cmc-statusInfo', ),
'1.3.6.1.5.5.7.7.2': ('id-cmc-identification', ),
'1.3.6.1.5.5.7.7.3': ('id-cmc-identityProof', ),
'1.3.6.1.5.5.7.7.4': ('id-cmc-dataReturn', ),
'1.3.6.1.5.5.7.7.5': ('id-cmc-transactionId', ),
'1.3.6.1.5.5.7.7.6': ('id-cmc-senderNonce', ),
'1.3.6.1.5.5.7.7.7': ('id-cmc-recipientNonce', ),
'1.3.6.1.5.5.7.7.8': ('id-cmc-addExtensions', ),
'1.3.6.1.5.5.7.7.9': ('id-cmc-encryptedPOP', ),
'1.3.6.1.5.5.7.7.10': ('id-cmc-decryptedPOP', ),
'1.3.6.1.5.5.7.7.11': ('id-cmc-lraPOPWitness', ),
'1.3.6.1.5.5.7.7.15': ('id-cmc-getCert', ),
'1.3.6.1.5.5.7.7.16': ('id-cmc-getCRL', ),
'1.3.6.1.5.5.7.7.17': ('id-cmc-revokeRequest', ),
'1.3.6.1.5.5.7.7.18': ('id-cmc-regInfo', ),
'1.3.6.1.5.5.7.7.19': ('id-cmc-responseInfo', ),
'1.3.6.1.5.5.7.7.21': ('id-cmc-queryPending', ),
'1.3.6.1.5.5.7.7.22': ('id-cmc-popLinkRandom', ),
'1.3.6.1.5.5.7.7.23': ('id-cmc-popLinkWitness', ),
'1.3.6.1.5.5.7.7.24': ('id-cmc-confirmCertAcceptance', ),
'1.3.6.1.5.5.7.8': ('id-on', ),
'1.3.6.1.5.5.7.8.1': ('id-on-personalData', ),
'1.3.6.1.5.5.7.8.3': ('Permanent Identifier', 'id-on-permanentIdentifier'),
'1.3.6.1.5.5.7.9': ('id-pda', ),
'1.3.6.1.5.5.7.9.1': ('id-pda-dateOfBirth', ),
'1.3.6.1.5.5.7.9.2': ('id-pda-placeOfBirth', ),
'1.3.6.1.5.5.7.9.3': ('id-pda-gender', ),
'1.3.6.1.5.5.7.9.4': ('id-pda-countryOfCitizenship', ),
'1.3.6.1.5.5.7.9.5': ('id-pda-countryOfResidence', ),
'1.3.6.1.5.5.7.10': ('id-aca', ),
'1.3.6.1.5.5.7.10.1': ('id-aca-authenticationInfo', ),
'1.3.6.1.5.5.7.10.2': ('id-aca-accessIdentity', ),
'1.3.6.1.5.5.7.10.3': ('id-aca-chargingIdentity', ),
'1.3.6.1.5.5.7.10.4': ('id-aca-group', ),
'1.3.6.1.5.5.7.10.5': ('id-aca-role', ),
'1.3.6.1.5.5.7.10.6': ('id-aca-encAttrs', ),
'1.3.6.1.5.5.7.11': ('id-qcs', ),
'1.3.6.1.5.5.7.11.1': ('id-qcs-pkixQCSyntax-v1', ),
'1.3.6.1.5.5.7.12': ('id-cct', ),
'1.3.6.1.5.5.7.12.1': ('id-cct-crs', ),
'1.3.6.1.5.5.7.12.2': ('id-cct-PKIData', ),
'1.3.6.1.5.5.7.12.3': ('id-cct-PKIResponse', ),
'1.3.6.1.5.5.7.21': ('id-ppl', ),
'1.3.6.1.5.5.7.21.0': ('Any language', 'id-ppl-anyLanguage'),
'1.3.6.1.5.5.7.21.1': ('Inherit all', 'id-ppl-inheritAll'),
'1.3.6.1.5.5.7.21.2': ('Independent', 'id-ppl-independent'),
'1.3.6.1.5.5.7.48': ('id-ad', ),
'1.3.6.1.5.5.7.48.1': ('OCSP', 'OCSP', 'id-pkix-OCSP'),
'1.3.6.1.5.5.7.48.1.1': ('Basic OCSP Response', 'basicOCSPResponse'),
'1.3.6.1.5.5.7.48.1.2': ('OCSP Nonce', 'Nonce'),
'1.3.6.1.5.5.7.48.1.3': ('OCSP CRL ID', 'CrlID'),
'1.3.6.1.5.5.7.48.1.4': ('Acceptable OCSP Responses', 'acceptableResponses'),
'1.3.6.1.5.5.7.48.1.5': ('OCSP No Check', 'noCheck'),
'1.3.6.1.5.5.7.48.1.6': ('OCSP Archive Cutoff', 'archiveCutoff'),
'1.3.6.1.5.5.7.48.1.7': ('OCSP Service Locator', 'serviceLocator'),
'1.3.6.1.5.5.7.48.1.8': ('Extended OCSP Status', 'extendedStatus'),
'1.3.6.1.5.5.7.48.1.9': ('valid', ),
'1.3.6.1.5.5.7.48.1.10': ('path', ),
'1.3.6.1.5.5.7.48.1.11': ('Trust Root', 'trustRoot'),
'1.3.6.1.5.5.7.48.2': ('CA Issuers', 'caIssuers'),
'1.3.6.1.5.5.7.48.3': ('AD Time Stamping', 'ad_timestamping'),
'1.3.6.1.5.5.7.48.4': ('ad dvcs', 'AD_DVCS'),
'1.3.6.1.5.5.7.48.5': ('CA Repository', 'caRepository'),
'1.3.6.1.5.5.8.1.1': ('hmac-md5', 'HMAC-MD5'),
'1.3.6.1.5.5.8.1.2': ('hmac-sha1', 'HMAC-SHA1'),
'1.3.6.1.6': ('SNMPv2', 'snmpv2'),
'1.3.6.1.7': ('Mail', ),
'1.3.6.1.7.1': ('MIME MHS', 'mime-mhs'),
'1.3.6.1.7.1.1': ('mime-mhs-headings', 'mime-mhs-headings'),
'1.3.6.1.7.1.1.1': ('id-hex-partial-message', 'id-hex-partial-message'),
'1.3.6.1.7.1.1.2': ('id-hex-multipart-message', 'id-hex-multipart-message'),
'1.3.6.1.7.1.2': ('mime-mhs-bodies', 'mime-mhs-bodies'),
'1.3.14.3.2': ('algorithm', 'algorithm'),
'1.3.14.3.2.3': ('md5WithRSA', 'RSA-NP-MD5'),
'1.3.14.3.2.6': ('des-ecb', 'DES-ECB'),
'1.3.14.3.2.7': ('des-cbc', 'DES-CBC'),
'1.3.14.3.2.8': ('des-ofb', 'DES-OFB'),
'1.3.14.3.2.9': ('des-cfb', 'DES-CFB'),
'1.3.14.3.2.11': ('rsaSignature', ),
'1.3.14.3.2.12': ('dsaEncryption-old', 'DSA-old'),
'1.3.14.3.2.13': ('dsaWithSHA', 'DSA-SHA'),
'1.3.14.3.2.15': ('shaWithRSAEncryption', 'RSA-SHA'),
'1.3.14.3.2.17': ('des-ede', 'DES-EDE'),
'1.3.14.3.2.18': ('sha', 'SHA'),
'1.3.14.3.2.26': ('sha1', 'SHA1'),
'1.3.14.3.2.27': ('dsaWithSHA1-old', 'DSA-SHA1-old'),
'1.3.14.3.2.29': ('sha1WithRSA', 'RSA-SHA1-2'),
'1.3.36.3.2.1': ('ripemd160', 'RIPEMD160'),
'1.3.36.3.3.1.2': ('ripemd160WithRSA', 'RSA-RIPEMD160'),
'1.3.36.3.3.2.8.1.1.1': ('brainpoolP160r1', ),
'1.3.36.3.3.2.8.1.1.2': ('brainpoolP160t1', ),
'1.3.36.3.3.2.8.1.1.3': ('brainpoolP192r1', ),
'1.3.36.3.3.2.8.1.1.4': ('brainpoolP192t1', ),
'1.3.36.3.3.2.8.1.1.5': ('brainpoolP224r1', ),
'1.3.36.3.3.2.8.1.1.6': ('brainpoolP224t1', ),
'1.3.36.3.3.2.8.1.1.7': ('brainpoolP256r1', ),
'1.3.36.3.3.2.8.1.1.8': ('brainpoolP256t1', ),
'1.3.36.3.3.2.8.1.1.9': ('brainpoolP320r1', ),
'1.3.36.3.3.2.8.1.1.10': ('brainpoolP320t1', ),
'1.3.36.3.3.2.8.1.1.11': ('brainpoolP384r1', ),
'1.3.36.3.3.2.8.1.1.12': ('brainpoolP384t1', ),
'1.3.36.3.3.2.8.1.1.13': ('brainpoolP512r1', ),
'1.3.36.3.3.2.8.1.1.14': ('brainpoolP512t1', ),
'1.3.36.8.3.3': ('Professional Information or basis for Admission', 'x509ExtAdmission'),
'1.3.101.1.4.1': ('Strong Extranet ID', 'SXNetID'),
'1.3.101.110': ('X25519', ),
'1.3.101.111': ('X448', ),
'1.3.101.112': ('ED25519', ),
'1.3.101.113': ('ED448', ),
'1.3.111': ('ieee', ),
'1.3.111.2.1619': ('IEEE Security in Storage Working Group', 'ieee-siswg'),
'1.3.111.2.1619.0.1.1': ('aes-128-xts', 'AES-128-XTS'),
'1.3.111.2.1619.0.1.2': ('aes-256-xts', 'AES-256-XTS'),
'1.3.132': ('certicom-arc', ),
'1.3.132.0': ('secg_ellipticCurve', ),
'1.3.132.0.1': ('sect163k1', ),
'1.3.132.0.2': ('sect163r1', ),
'1.3.132.0.3': ('sect239k1', ),
'1.3.132.0.4': ('sect113r1', ),
'1.3.132.0.5': ('sect113r2', ),
'1.3.132.0.6': ('secp112r1', ),
'1.3.132.0.7': ('secp112r2', ),
'1.3.132.0.8': ('secp160r1', ),
'1.3.132.0.9': ('secp160k1', ),
'1.3.132.0.10': ('secp256k1', ),
'1.3.132.0.15': ('sect163r2', ),
'1.3.132.0.16': ('sect283k1', ),
'1.3.132.0.17': ('sect283r1', ),
'1.3.132.0.22': ('sect131r1', ),
'1.3.132.0.23': ('sect131r2', ),
'1.3.132.0.24': ('sect193r1', ),
'1.3.132.0.25': ('sect193r2', ),
'1.3.132.0.26': ('sect233k1', ),
'1.3.132.0.27': ('sect233r1', ),
'1.3.132.0.28': ('secp128r1', ),
'1.3.132.0.29': ('secp128r2', ),
'1.3.132.0.30': ('secp160r2', ),
'1.3.132.0.31': ('secp192k1', ),
'1.3.132.0.32': ('secp224k1', ),
'1.3.132.0.33': ('secp224r1', ),
'1.3.132.0.34': ('secp384r1', ),
'1.3.132.0.35': ('secp521r1', ),
'1.3.132.0.36': ('sect409k1', ),
'1.3.132.0.37': ('sect409r1', ),
'1.3.132.0.38': ('sect571k1', ),
'1.3.132.0.39': ('sect571r1', ),
'1.3.132.1': ('secg-scheme', ),
'1.3.132.1.11.0': ('dhSinglePass-stdDH-sha224kdf-scheme', ),
'1.3.132.1.11.1': ('dhSinglePass-stdDH-sha256kdf-scheme', ),
'1.3.132.1.11.2': ('dhSinglePass-stdDH-sha384kdf-scheme', ),
'1.3.132.1.11.3': ('dhSinglePass-stdDH-sha512kdf-scheme', ),
'1.3.132.1.14.0': ('dhSinglePass-cofactorDH-sha224kdf-scheme', ),
'1.3.132.1.14.1': ('dhSinglePass-cofactorDH-sha256kdf-scheme', ),
'1.3.132.1.14.2': ('dhSinglePass-cofactorDH-sha384kdf-scheme', ),
'1.3.132.1.14.3': ('dhSinglePass-cofactorDH-sha512kdf-scheme', ),
'1.3.133.16.840.63.0': ('x9-63-scheme', ),
'1.3.133.16.840.63.0.2': ('dhSinglePass-stdDH-sha1kdf-scheme', ),
'1.3.133.16.840.63.0.3': ('dhSinglePass-cofactorDH-sha1kdf-scheme', ),
'2': ('joint-iso-itu-t', 'JOINT-ISO-ITU-T', 'joint-iso-ccitt'),
'2.5': ('directory services (X.500)', 'X500'),
'2.5.1.5': ('Selected Attribute Types', 'selected-attribute-types'),
'2.5.1.5.55': ('clearance', ),
'2.5.4': ('X509', ),
'2.5.4.3': ('commonName', 'CN'),
'2.5.4.4': ('surname', 'SN'),
'2.5.4.5': ('serialNumber', ),
'2.5.4.6': ('countryName', 'C'),
'2.5.4.7': ('localityName', 'L'),
'2.5.4.8': ('stateOrProvinceName', 'ST'),
'2.5.4.9': ('streetAddress', 'street'),
'2.5.4.10': ('organizationName', 'O'),
'2.5.4.11': ('organizationalUnitName', 'OU'),
'2.5.4.12': ('title', 'title'),
'2.5.4.13': ('description', ),
'2.5.4.14': ('searchGuide', ),
'2.5.4.15': ('businessCategory', ),
'2.5.4.16': ('postalAddress', ),
'2.5.4.17': ('postalCode', ),
'2.5.4.18': ('postOfficeBox', ),
'2.5.4.19': ('physicalDeliveryOfficeName', ),
'2.5.4.20': ('telephoneNumber', ),
'2.5.4.21': ('telexNumber', ),
'2.5.4.22': ('teletexTerminalIdentifier', ),
'2.5.4.23': ('facsimileTelephoneNumber', ),
'2.5.4.24': ('x121Address', ),
'2.5.4.25': ('internationaliSDNNumber', ),
'2.5.4.26': ('registeredAddress', ),
'2.5.4.27': ('destinationIndicator', ),
'2.5.4.28': ('preferredDeliveryMethod', ),
'2.5.4.29': ('presentationAddress', ),
'2.5.4.30': ('supportedApplicationContext', ),
'2.5.4.31': ('member', ),
'2.5.4.32': ('owner', ),
'2.5.4.33': ('roleOccupant', ),
'2.5.4.34': ('seeAlso', ),
'2.5.4.35': ('userPassword', ),
'2.5.4.36': ('userCertificate', ),
'2.5.4.37': ('cACertificate', ),
'2.5.4.38': ('authorityRevocationList', ),
'2.5.4.39': ('certificateRevocationList', ),
'2.5.4.40': ('crossCertificatePair', ),
'2.5.4.41': ('name', 'name'),
'2.5.4.42': ('givenName', 'GN'),
'2.5.4.43': ('initials', 'initials'),
'2.5.4.44': ('generationQualifier', ),
'2.5.4.45': ('x500UniqueIdentifier', ),
'2.5.4.46': ('dnQualifier', 'dnQualifier'),
'2.5.4.47': ('enhancedSearchGuide', ),
'2.5.4.48': ('protocolInformation', ),
'2.5.4.49': ('distinguishedName', ),
'2.5.4.50': ('uniqueMember', ),
'2.5.4.51': ('houseIdentifier', ),
'2.5.4.52': ('supportedAlgorithms', ),
'2.5.4.53': ('deltaRevocationList', ),
'2.5.4.54': ('dmdName', ),
'2.5.4.65': ('pseudonym', ),
'2.5.4.72': ('role', 'role'),
'2.5.4.97': ('organizationIdentifier', ),
'2.5.4.98': ('countryCode3c', 'c3'),
'2.5.4.99': ('countryCode3n', 'n3'),
'2.5.4.100': ('dnsName', ),
'2.5.8': ('directory services - algorithms', 'X500algorithms'),
'2.5.8.1.1': ('rsa', 'RSA'),
'2.5.8.3.100': ('mdc2WithRSA', 'RSA-MDC2'),
'2.5.8.3.101': ('mdc2', 'MDC2'),
'2.5.29': ('id-ce', ),
'2.5.29.9': ('X509v3 Subject Directory Attributes', 'subjectDirectoryAttributes'),
'2.5.29.14': ('X509v3 Subject Key Identifier', 'subjectKeyIdentifier'),
'2.5.29.15': ('X509v3 Key Usage', 'keyUsage'),
'2.5.29.16': ('X509v3 Private Key Usage Period', 'privateKeyUsagePeriod'),
'2.5.29.17': ('X509v3 Subject Alternative Name', 'subjectAltName'),
'2.5.29.18': ('X509v3 Issuer Alternative Name', 'issuerAltName'),
'2.5.29.19': ('X509v3 Basic Constraints', 'basicConstraints'),
'2.5.29.20': ('X509v3 CRL Number', 'crlNumber'),
'2.5.29.21': ('X509v3 CRL Reason Code', 'CRLReason'),
'2.5.29.23': ('Hold Instruction Code', 'holdInstructionCode'),
'2.5.29.24': ('Invalidity Date', 'invalidityDate'),
'2.5.29.27': ('X509v3 Delta CRL Indicator', 'deltaCRL'),
'2.5.29.28': ('X509v3 Issuing Distribution Point', 'issuingDistributionPoint'),
'2.5.29.29': ('X509v3 Certificate Issuer', 'certificateIssuer'),
'2.5.29.30': ('X509v3 Name Constraints', 'nameConstraints'),
'2.5.29.31': ('X509v3 CRL Distribution Points', 'crlDistributionPoints'),
'2.5.29.32': ('X509v3 Certificate Policies', 'certificatePolicies'),
'2.5.29.32.0': ('X509v3 Any Policy', 'anyPolicy'),
'2.5.29.33': ('X509v3 Policy Mappings', 'policyMappings'),
'2.5.29.35': ('X509v3 Authority Key Identifier', 'authorityKeyIdentifier'),
'2.5.29.36': ('X509v3 Policy Constraints', 'policyConstraints'),
'2.5.29.37': ('X509v3 Extended Key Usage', 'extendedKeyUsage'),
'2.5.29.37.0': ('Any Extended Key Usage', 'anyExtendedKeyUsage'),
'2.5.29.46': ('X509v3 Freshest CRL', 'freshestCRL'),
'2.5.29.54': ('X509v3 Inhibit Any Policy', 'inhibitAnyPolicy'),
'2.5.29.55': ('X509v3 AC Targeting', 'targetInformation'),
'2.5.29.56': ('X509v3 No Revocation Available', 'noRevAvail'),
'2.16.840.1.101.3': ('csor', ),
'2.16.840.1.101.3.4': ('nistAlgorithms', ),
'2.16.840.1.101.3.4.1': ('aes', ),
'2.16.840.1.101.3.4.1.1': ('aes-128-ecb', 'AES-128-ECB'),
'2.16.840.1.101.3.4.1.2': ('aes-128-cbc', 'AES-128-CBC'),
'2.16.840.1.101.3.4.1.3': ('aes-128-ofb', 'AES-128-OFB'),
'2.16.840.1.101.3.4.1.4': ('aes-128-cfb', 'AES-128-CFB'),
'2.16.840.1.101.3.4.1.5': ('id-aes128-wrap', ),
'2.16.840.1.101.3.4.1.6': ('aes-128-gcm', 'id-aes128-GCM'),
'2.16.840.1.101.3.4.1.7': ('aes-128-ccm', 'id-aes128-CCM'),
'2.16.840.1.101.3.4.1.8': ('id-aes128-wrap-pad', ),
'2.16.840.1.101.3.4.1.21': ('aes-192-ecb', 'AES-192-ECB'),
'2.16.840.1.101.3.4.1.22': ('aes-192-cbc', 'AES-192-CBC'),
'2.16.840.1.101.3.4.1.23': ('aes-192-ofb', 'AES-192-OFB'),
'2.16.840.1.101.3.4.1.24': ('aes-192-cfb', 'AES-192-CFB'),
'2.16.840.1.101.3.4.1.25': ('id-aes192-wrap', ),
'2.16.840.1.101.3.4.1.26': ('aes-192-gcm', 'id-aes192-GCM'),
'2.16.840.1.101.3.4.1.27': ('aes-192-ccm', 'id-aes192-CCM'),
'2.16.840.1.101.3.4.1.28': ('id-aes192-wrap-pad', ),
'2.16.840.1.101.3.4.1.41': ('aes-256-ecb', 'AES-256-ECB'),
'2.16.840.1.101.3.4.1.42': ('aes-256-cbc', 'AES-256-CBC'),
'2.16.840.1.101.3.4.1.43': ('aes-256-ofb', 'AES-256-OFB'),
'2.16.840.1.101.3.4.1.44': ('aes-256-cfb', 'AES-256-CFB'),
'2.16.840.1.101.3.4.1.45': ('id-aes256-wrap', ),
'2.16.840.1.101.3.4.1.46': ('aes-256-gcm', 'id-aes256-GCM'),
'2.16.840.1.101.3.4.1.47': ('aes-256-ccm', 'id-aes256-CCM'),
'2.16.840.1.101.3.4.1.48': ('id-aes256-wrap-pad', ),
'2.16.840.1.101.3.4.2': ('nist_hashalgs', ),
'2.16.840.1.101.3.4.2.1': ('sha256', 'SHA256'),
'2.16.840.1.101.3.4.2.2': ('sha384', 'SHA384'),
'2.16.840.1.101.3.4.2.3': ('sha512', 'SHA512'),
'2.16.840.1.101.3.4.2.4': ('sha224', 'SHA224'),
'2.16.840.1.101.3.4.2.5': ('sha512-224', 'SHA512-224'),
'2.16.840.1.101.3.4.2.6': ('sha512-256', 'SHA512-256'),
'2.16.840.1.101.3.4.2.7': ('sha3-224', 'SHA3-224'),
'2.16.840.1.101.3.4.2.8': ('sha3-256', 'SHA3-256'),
'2.16.840.1.101.3.4.2.9': ('sha3-384', 'SHA3-384'),
'2.16.840.1.101.3.4.2.10': ('sha3-512', 'SHA3-512'),
'2.16.840.1.101.3.4.2.11': ('shake128', 'SHAKE128'),
'2.16.840.1.101.3.4.2.12': ('shake256', 'SHAKE256'),
'2.16.840.1.101.3.4.2.13': ('hmac-sha3-224', 'id-hmacWithSHA3-224'),
'2.16.840.1.101.3.4.2.14': ('hmac-sha3-256', 'id-hmacWithSHA3-256'),
'2.16.840.1.101.3.4.2.15': ('hmac-sha3-384', 'id-hmacWithSHA3-384'),
'2.16.840.1.101.3.4.2.16': ('hmac-sha3-512', 'id-hmacWithSHA3-512'),
'2.16.840.1.101.3.4.3': ('dsa_with_sha2', 'sigAlgs'),
'2.16.840.1.101.3.4.3.1': ('dsa_with_SHA224', ),
'2.16.840.1.101.3.4.3.2': ('dsa_with_SHA256', ),
'2.16.840.1.101.3.4.3.3': ('dsa_with_SHA384', 'id-dsa-with-sha384'),
'2.16.840.1.101.3.4.3.4': ('dsa_with_SHA512', 'id-dsa-with-sha512'),
'2.16.840.1.101.3.4.3.5': ('dsa_with_SHA3-224', 'id-dsa-with-sha3-224'),
'2.16.840.1.101.3.4.3.6': ('dsa_with_SHA3-256', 'id-dsa-with-sha3-256'),
'2.16.840.1.101.3.4.3.7': ('dsa_with_SHA3-384', 'id-dsa-with-sha3-384'),
'2.16.840.1.101.3.4.3.8': ('dsa_with_SHA3-512', 'id-dsa-with-sha3-512'),
'2.16.840.1.101.3.4.3.9': ('ecdsa_with_SHA3-224', 'id-ecdsa-with-sha3-224'),
'2.16.840.1.101.3.4.3.10': ('ecdsa_with_SHA3-256', 'id-ecdsa-with-sha3-256'),
'2.16.840.1.101.3.4.3.11': ('ecdsa_with_SHA3-384', 'id-ecdsa-with-sha3-384'),
'2.16.840.1.101.3.4.3.12': ('ecdsa_with_SHA3-512', 'id-ecdsa-with-sha3-512'),
'2.16.840.1.101.3.4.3.13': ('RSA-SHA3-224', 'id-rsassa-pkcs1-v1_5-with-sha3-224'),
'2.16.840.1.101.3.4.3.14': ('RSA-SHA3-256', 'id-rsassa-pkcs1-v1_5-with-sha3-256'),
'2.16.840.1.101.3.4.3.15': ('RSA-SHA3-384', 'id-rsassa-pkcs1-v1_5-with-sha3-384'),
'2.16.840.1.101.3.4.3.16': ('RSA-SHA3-512', 'id-rsassa-pkcs1-v1_5-with-sha3-512'),
'2.16.840.1.113730': ('Netscape Communications Corp.', 'Netscape'),
'2.16.840.1.113730.1': ('Netscape Certificate Extension', 'nsCertExt'),
'2.16.840.1.113730.1.1': ('Netscape Cert Type', 'nsCertType'),
'2.16.840.1.113730.1.2': ('Netscape Base Url', 'nsBaseUrl'),
'2.16.840.1.113730.1.3': ('Netscape Revocation Url', 'nsRevocationUrl'),
'2.16.840.1.113730.1.4': ('Netscape CA Revocation Url', 'nsCaRevocationUrl'),
'2.16.840.1.113730.1.7': ('Netscape Renewal Url', 'nsRenewalUrl'),
'2.16.840.1.113730.1.8': ('Netscape CA Policy Url', 'nsCaPolicyUrl'),
'2.16.840.1.113730.1.12': ('Netscape SSL Server Name', 'nsSslServerName'),
'2.16.840.1.113730.1.13': ('Netscape Comment', 'nsComment'),
'2.16.840.1.113730.2': ('Netscape Data Type', 'nsDataType'),
'2.16.840.1.113730.2.5': ('Netscape Certificate Sequence', 'nsCertSequence'),
'2.16.840.1.113730.4.1': ('Netscape Server Gated Crypto', 'nsSGC'),
'2.23': ('International Organizations', 'international-organizations'),
'2.23.42': ('Secure Electronic Transactions', 'id-set'),
'2.23.42.0': ('content types', 'set-ctype'),
'2.23.42.0.0': ('setct-PANData', ),
'2.23.42.0.1': ('setct-PANToken', ),
'2.23.42.0.2': ('setct-PANOnly', ),
'2.23.42.0.3': ('setct-OIData', ),
'2.23.42.0.4': ('setct-PI', ),
'2.23.42.0.5': ('setct-PIData', ),
'2.23.42.0.6': ('setct-PIDataUnsigned', ),
'2.23.42.0.7': ('setct-HODInput', ),
'2.23.42.0.8': ('setct-AuthResBaggage', ),
'2.23.42.0.9': ('setct-AuthRevReqBaggage', ),
'2.23.42.0.10': ('setct-AuthRevResBaggage', ),
'2.23.42.0.11': ('setct-CapTokenSeq', ),
'2.23.42.0.12': ('setct-PInitResData', ),
'2.23.42.0.13': ('setct-PI-TBS', ),
'2.23.42.0.14': ('setct-PResData', ),
'2.23.42.0.16': ('setct-AuthReqTBS', ),
'2.23.42.0.17': ('setct-AuthResTBS', ),
'2.23.42.0.18': ('setct-AuthResTBSX', ),
'2.23.42.0.19': ('setct-AuthTokenTBS', ),
'2.23.42.0.20': ('setct-CapTokenData', ),
'2.23.42.0.21': ('setct-CapTokenTBS', ),
'2.23.42.0.22': ('setct-AcqCardCodeMsg', ),
'2.23.42.0.23': ('setct-AuthRevReqTBS', ),
'2.23.42.0.24': ('setct-AuthRevResData', ),
'2.23.42.0.25': ('setct-AuthRevResTBS', ),
'2.23.42.0.26': ('setct-CapReqTBS', ),
'2.23.42.0.27': ('setct-CapReqTBSX', ),
'2.23.42.0.28': ('setct-CapResData', ),
'2.23.42.0.29': ('setct-CapRevReqTBS', ),
'2.23.42.0.30': ('setct-CapRevReqTBSX', ),
'2.23.42.0.31': ('setct-CapRevResData', ),
'2.23.42.0.32': ('setct-CredReqTBS', ),
'2.23.42.0.33': ('setct-CredReqTBSX', ),
'2.23.42.0.34': ('setct-CredResData', ),
'2.23.42.0.35': ('setct-CredRevReqTBS', ),
'2.23.42.0.36': ('setct-CredRevReqTBSX', ),
'2.23.42.0.37': ('setct-CredRevResData', ),
'2.23.42.0.38': ('setct-PCertReqData', ),
'2.23.42.0.39': ('setct-PCertResTBS', ),
'2.23.42.0.40': ('setct-BatchAdminReqData', ),
'2.23.42.0.41': ('setct-BatchAdminResData', ),
'2.23.42.0.42': ('setct-CardCInitResTBS', ),
'2.23.42.0.43': ('setct-MeAqCInitResTBS', ),
'2.23.42.0.44': ('setct-RegFormResTBS', ),
'2.23.42.0.45': ('setct-CertReqData', ),
'2.23.42.0.46': ('setct-CertReqTBS', ),
'2.23.42.0.47': ('setct-CertResData', ),
'2.23.42.0.48': ('setct-CertInqReqTBS', ),
'2.23.42.0.49': ('setct-ErrorTBS', ),
'2.23.42.0.50': ('setct-PIDualSignedTBE', ),
'2.23.42.0.51': ('setct-PIUnsignedTBE', ),
'2.23.42.0.52': ('setct-AuthReqTBE', ),
'2.23.42.0.53': ('setct-AuthResTBE', ),
'2.23.42.0.54': ('setct-AuthResTBEX', ),
'2.23.42.0.55': ('setct-AuthTokenTBE', ),
'2.23.42.0.56': ('setct-CapTokenTBE', ),
'2.23.42.0.57': ('setct-CapTokenTBEX', ),
'2.23.42.0.58': ('setct-AcqCardCodeMsgTBE', ),
'2.23.42.0.59': ('setct-AuthRevReqTBE', ),
'2.23.42.0.60': ('setct-AuthRevResTBE', ),
'2.23.42.0.61': ('setct-AuthRevResTBEB', ),
'2.23.42.0.62': ('setct-CapReqTBE', ),
'2.23.42.0.63': ('setct-CapReqTBEX', ),
'2.23.42.0.64': ('setct-CapResTBE', ),
'2.23.42.0.65': ('setct-CapRevReqTBE', ),
'2.23.42.0.66': ('setct-CapRevReqTBEX', ),
'2.23.42.0.67': ('setct-CapRevResTBE', ),
'2.23.42.0.68': ('setct-CredReqTBE', ),
'2.23.42.0.69': ('setct-CredReqTBEX', ),
'2.23.42.0.70': ('setct-CredResTBE', ),
'2.23.42.0.71': ('setct-CredRevReqTBE', ),
'2.23.42.0.72': ('setct-CredRevReqTBEX', ),
'2.23.42.0.73': ('setct-CredRevResTBE', ),
'2.23.42.0.74': ('setct-BatchAdminReqTBE', ),
'2.23.42.0.75': ('setct-BatchAdminResTBE', ),
'2.23.42.0.76': ('setct-RegFormReqTBE', ),
'2.23.42.0.77': ('setct-CertReqTBE', ),
'2.23.42.0.78': ('setct-CertReqTBEX', ),
'2.23.42.0.79': ('setct-CertResTBE', ),
'2.23.42.0.80': ('setct-CRLNotificationTBS', ),
'2.23.42.0.81': ('setct-CRLNotificationResTBS', ),
'2.23.42.0.82': ('setct-BCIDistributionTBS', ),
'2.23.42.1': ('message extensions', 'set-msgExt'),
'2.23.42.1.1': ('generic cryptogram', 'setext-genCrypt'),
'2.23.42.1.3': ('merchant initiated auth', 'setext-miAuth'),
'2.23.42.1.4': ('setext-pinSecure', ),
'2.23.42.1.5': ('setext-pinAny', ),
'2.23.42.1.7': ('setext-track2', ),
'2.23.42.1.8': ('additional verification', 'setext-cv'),
'2.23.42.3': ('set-attr', ),
'2.23.42.3.0': ('setAttr-Cert', ),
'2.23.42.3.0.0': ('set-rootKeyThumb', ),
'2.23.42.3.0.1': ('set-addPolicy', ),
'2.23.42.3.1': ('payment gateway capabilities', 'setAttr-PGWYcap'),
'2.23.42.3.2': ('setAttr-TokenType', ),
'2.23.42.3.2.1': ('setAttr-Token-EMV', ),
'2.23.42.3.2.2': ('setAttr-Token-B0Prime', ),
'2.23.42.3.3': ('issuer capabilities', 'setAttr-IssCap'),
'2.23.42.3.3.3': ('setAttr-IssCap-CVM', ),
'2.23.42.3.3.3.1': ('generate cryptogram', 'setAttr-GenCryptgrm'),
'2.23.42.3.3.4': ('setAttr-IssCap-T2', ),
'2.23.42.3.3.4.1': ('encrypted track 2', 'setAttr-T2Enc'),
'2.23.42.3.3.4.2': ('cleartext track 2', 'setAttr-T2cleartxt'),
'2.23.42.3.3.5': ('setAttr-IssCap-Sig', ),
'2.23.42.3.3.5.1': ('ICC or token signature', 'setAttr-TokICCsig'),
'2.23.42.3.3.5.2': ('secure device signature', 'setAttr-SecDevSig'),
'2.23.42.5': ('set-policy', ),
'2.23.42.5.0': ('set-policy-root', ),
'2.23.42.7': ('certificate extensions', 'set-certExt'),
'2.23.42.7.0': ('setCext-hashedRoot', ),
'2.23.42.7.1': ('setCext-certType', ),
'2.23.42.7.2': ('setCext-merchData', ),
'2.23.42.7.3': ('setCext-cCertRequired', ),
'2.23.42.7.4': ('setCext-tunneling', ),
'2.23.42.7.5': ('setCext-setExt', ),
'2.23.42.7.6': ('setCext-setQualf', ),
'2.23.42.7.7': ('setCext-PGWYcapabilities', ),
'2.23.42.7.8': ('setCext-TokenIdentifier', ),
'2.23.42.7.9': ('setCext-Track2Data', ),
'2.23.42.7.10': ('setCext-TokenType', ),
'2.23.42.7.11': ('setCext-IssuerCapabilities', ),
'2.23.42.8': ('set-brand', ),
'2.23.42.8.1': ('set-brand-IATA-ATA', ),
'2.23.42.8.4': ('set-brand-Visa', ),
'2.23.42.8.5': ('set-brand-MasterCard', ),
'2.23.42.8.30': ('set-brand-Diners', ),
'2.23.42.8.34': ('set-brand-AmericanExpress', ),
'2.23.42.8.35': ('set-brand-JCB', ),
'2.23.42.8.6011': ('set-brand-Novus', ),
'2.23.43': ('wap', ),
'2.23.43.1': ('wap-wsg', ),
'2.23.43.1.4': ('wap-wsg-idm-ecid', ),
'2.23.43.1.4.1': ('wap-wsg-idm-ecid-wtls1', ),
'2.23.43.1.4.3': ('wap-wsg-idm-ecid-wtls3', ),
'2.23.43.1.4.4': ('wap-wsg-idm-ecid-wtls4', ),
'2.23.43.1.4.5': ('wap-wsg-idm-ecid-wtls5', ),
'2.23.43.1.4.6': ('wap-wsg-idm-ecid-wtls6', ),
'2.23.43.1.4.7': ('wap-wsg-idm-ecid-wtls7', ),
'2.23.43.1.4.8': ('wap-wsg-idm-ecid-wtls8', ),
'2.23.43.1.4.9': ('wap-wsg-idm-ecid-wtls9', ),
'2.23.43.1.4.10': ('wap-wsg-idm-ecid-wtls10', ),
'2.23.43.1.4.11': ('wap-wsg-idm-ecid-wtls11', ),
'2.23.43.1.4.12': ('wap-wsg-idm-ecid-wtls12', ),
}
# #####################################################################################
# #####################################################################################
_OID_LOOKUP = dict()
_NORMALIZE_NAMES = dict()
_NORMALIZE_NAMES_SHORT = dict()
for dotted, names in _OID_MAP.items():
for name in names:
if name in _NORMALIZE_NAMES and _OID_LOOKUP[name] != dotted:
raise AssertionError(
'Name collision during setup: "{0}" for OIDs {1} and {2}'
.format(name, dotted, _OID_LOOKUP[name])
)
_NORMALIZE_NAMES[name] = names[0]
_NORMALIZE_NAMES_SHORT[name] = names[-1]
_OID_LOOKUP[name] = dotted
for alias, original in [('userID', 'userId')]:
if alias in _NORMALIZE_NAMES:
raise AssertionError(
'Name collision during adding aliases: "{0}" (alias for "{1}") is already mapped to OID {2}'
.format(alias, original, _OID_LOOKUP[alias])
)
_NORMALIZE_NAMES[alias] = original
_NORMALIZE_NAMES_SHORT[alias] = _NORMALIZE_NAMES_SHORT[original]
_OID_LOOKUP[alias] = _OID_LOOKUP[original]
def pyopenssl_normalize_name(name, short=False):
nid = OpenSSL._util.lib.OBJ_txt2nid(to_bytes(name))
if nid != 0:
b_name = OpenSSL._util.lib.OBJ_nid2ln(nid)
name = to_text(OpenSSL._util.ffi.string(b_name))
if short:
return _NORMALIZE_NAMES_SHORT.get(name, name)
else:
return _NORMALIZE_NAMES.get(name, name)
# #####################################################################################
# #####################################################################################
# # This excerpt is dual licensed under the terms of the Apache License, Version
# # 2.0, and the BSD License. See the LICENSE file at
# # https://github.com/pyca/cryptography/blob/master/LICENSE for complete details.
# #
# # Adapted from cryptography's hazmat/backends/openssl/decode_asn1.py
# #
# # Copyright (c) 2015, 2016 Paul Kehrer (@reaperhulk)
# # Copyright (c) 2017 Fraser Tweedale (@frasertweedale)
# #
# # Relevant commits from cryptography project (https://github.com/pyca/cryptography):
# # pyca/cryptography@719d536dd691e84e208534798f2eb4f82aaa2e07
# # pyca/cryptography@5ab6d6a5c05572bd1c75f05baf264a2d0001894a
# # pyca/cryptography@2e776e20eb60378e0af9b7439000d0e80da7c7e3
# # pyca/cryptography@fb309ed24647d1be9e319b61b1f2aa8ebb87b90b
# # pyca/cryptography@2917e460993c475c72d7146c50dc3bbc2414280d
# # pyca/cryptography@3057f91ea9a05fb593825006d87a391286a4d828
# # pyca/cryptography@d607dd7e5bc5c08854ec0c9baff70ba4a35be36f
def _obj2txt(openssl_lib, openssl_ffi, obj):
# Set to 80 on the recommendation of
# https://www.openssl.org/docs/crypto/OBJ_nid2ln.html#return_values
#
# But OIDs longer than this occur in real life (e.g. Active
# Directory makes some very long OIDs). So we need to detect
# and properly handle the case where the default buffer is not
# big enough.
#
buf_len = 80
buf = openssl_ffi.new("char[]", buf_len)
# 'res' is the number of bytes that *would* be written if the
# buffer is large enough. If 'res' > buf_len - 1, we need to
# alloc a big-enough buffer and go again.
res = openssl_lib.OBJ_obj2txt(buf, buf_len, obj, 1)
if res > buf_len - 1: # account for terminating null byte
buf_len = res + 1
buf = openssl_ffi.new("char[]", buf_len)
res = openssl_lib.OBJ_obj2txt(buf, buf_len, obj, 1)
return openssl_ffi.buffer(buf, res)[:].decode()
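`_obj2txt` uses a common C-API idiom: call once with a fixed-size buffer, and if the return value reports that more bytes were needed, allocate a large-enough buffer and call again. A pure-Python sketch of that retry pattern, with the hypothetical `fake_obj2txt` standing in for the real `OBJ_obj2txt` (it returns the number of bytes that *would* be written, truncating output to the buffer size):

```python
# Stand-in for OBJ_obj2txt: writes at most buf_len - 1 bytes (leaving
# room for the NUL terminator) and returns the full required length.
def fake_obj2txt(buf, buf_len, text):
    data = text.encode()
    n = min(len(data), buf_len - 1)
    buf[:n] = data[:n]
    return len(data)

def obj2txt(text, initial_len=8):
    buf_len = initial_len
    buf = bytearray(buf_len)
    res = fake_obj2txt(buf, buf_len, text)
    if res > buf_len - 1:  # output was truncated: grow and retry
        buf_len = res + 1
        buf = bytearray(buf_len)
        res = fake_obj2txt(buf, buf_len, text)
    return bytes(buf[:res]).decode()

print(obj2txt('1.3.6.1.4.1.311.21.8.4217593'))  # longer than the 8-byte default
```

The second pass is only taken for OIDs longer than the initial buffer, so the common case stays a single call.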
# #####################################################################################
# #####################################################################################
def cryptography_get_extensions_from_cert(cert):
# Since cryptography won't give us the DER value for an extension
# (that is only stored for unrecognized extensions), we have to re-do
    # the extension parsing ourselves.
result = dict()
backend = cert._backend
x509_obj = cert._x509
for i in range(backend._lib.X509_get_ext_count(x509_obj)):
ext = backend._lib.X509_get_ext(x509_obj, i)
if ext == backend._ffi.NULL:
continue
crit = backend._lib.X509_EXTENSION_get_critical(ext)
data = backend._lib.X509_EXTENSION_get_data(ext)
backend.openssl_assert(data != backend._ffi.NULL)
der = backend._ffi.buffer(data.data, data.length)[:]
entry = dict(
critical=(crit == 1),
value=base64.b64encode(der),
)
oid = _obj2txt(backend._lib, backend._ffi, backend._lib.X509_EXTENSION_get_object(ext))
result[oid] = entry
return result
def cryptography_get_extensions_from_csr(csr):
# Since cryptography won't give us the DER value for an extension
# (that is only stored for unrecognized extensions), we have to re-do
    # the extension parsing ourselves.
result = dict()
backend = csr._backend
extensions = backend._lib.X509_REQ_get_extensions(csr._x509_req)
extensions = backend._ffi.gc(
extensions,
lambda ext: backend._lib.sk_X509_EXTENSION_pop_free(
ext,
backend._ffi.addressof(backend._lib._original_lib, "X509_EXTENSION_free")
)
)
for i in range(backend._lib.sk_X509_EXTENSION_num(extensions)):
ext = backend._lib.sk_X509_EXTENSION_value(extensions, i)
if ext == backend._ffi.NULL:
continue
crit = backend._lib.X509_EXTENSION_get_critical(ext)
data = backend._lib.X509_EXTENSION_get_data(ext)
backend.openssl_assert(data != backend._ffi.NULL)
der = backend._ffi.buffer(data.data, data.length)[:]
entry = dict(
critical=(crit == 1),
value=base64.b64encode(der),
)
oid = _obj2txt(backend._lib, backend._ffi, backend._lib.X509_EXTENSION_get_object(ext))
result[oid] = entry
return result
def pyopenssl_get_extensions_from_cert(cert):
# While pyOpenSSL allows us to get an extension's DER value, it won't
# give us the dotted string for an OID. So we have to do some magic to
# get hold of it.
result = dict()
ext_count = cert.get_extension_count()
for i in range(0, ext_count):
ext = cert.get_extension(i)
entry = dict(
critical=bool(ext.get_critical()),
value=base64.b64encode(ext.get_data()),
)
oid = _obj2txt(
OpenSSL._util.lib,
OpenSSL._util.ffi,
OpenSSL._util.lib.X509_EXTENSION_get_object(ext._extension)
)
# This could also be done a bit simpler:
#
# oid = _obj2txt(OpenSSL._util.lib, OpenSSL._util.ffi, OpenSSL._util.lib.OBJ_nid2obj(ext._nid))
#
# Unfortunately this gives the wrong result in case the linked OpenSSL
# doesn't know the OID. That's why we have to get the OID dotted string
# similarly to how cryptography does it.
result[oid] = entry
return result
def pyopenssl_get_extensions_from_csr(csr):
# While pyOpenSSL allows us to get an extension's DER value, it won't
# give us the dotted string for an OID. So we have to do some magic to
# get hold of it.
result = dict()
for ext in csr.get_extensions():
entry = dict(
critical=bool(ext.get_critical()),
value=base64.b64encode(ext.get_data()),
)
oid = _obj2txt(
OpenSSL._util.lib,
OpenSSL._util.ffi,
OpenSSL._util.lib.X509_EXTENSION_get_object(ext._extension)
)
# This could also be done a bit simpler:
#
# oid = _obj2txt(OpenSSL._util.lib, OpenSSL._util.ffi, OpenSSL._util.lib.OBJ_nid2obj(ext._nid))
#
# Unfortunately this gives the wrong result in case the linked OpenSSL
# doesn't know the OID. That's why we have to get the OID dotted string
# similarly to how cryptography does it.
result[oid] = entry
return result
def cryptography_name_to_oid(name):
dotted = _OID_LOOKUP.get(name)
if dotted is None:
raise OpenSSLObjectError('Cannot find OID for "{0}"'.format(name))
return x509.oid.ObjectIdentifier(dotted)
def cryptography_oid_to_name(oid, short=False):
dotted_string = oid.dotted_string
names = _OID_MAP.get(dotted_string)
name = names[0] if names else oid._name
if short:
return _NORMALIZE_NAMES_SHORT.get(name, name)
else:
return _NORMALIZE_NAMES.get(name, name)
def cryptography_get_name(name):
'''
Given a name string, returns a cryptography x509.Name object.
Raises an OpenSSLObjectError if the name is unknown or cannot be parsed.
'''
try:
if name.startswith('DNS:'):
return x509.DNSName(to_text(name[4:]))
if name.startswith('IP:'):
return x509.IPAddress(ipaddress.ip_address(to_text(name[3:])))
if name.startswith('email:'):
return x509.RFC822Name(to_text(name[6:]))
if name.startswith('URI:'):
return x509.UniformResourceIdentifier(to_text(name[4:]))
except Exception as e:
raise OpenSSLObjectError('Cannot parse Subject Alternative Name "{0}": {1}'.format(name, e))
if ':' not in name:
raise OpenSSLObjectError('Cannot parse Subject Alternative Name "{0}" (forgot "DNS:" prefix?)'.format(name))
raise OpenSSLObjectError('Cannot parse Subject Alternative Name "{0}" (potentially unsupported by cryptography backend)'.format(name))
def _get_hex(bytesstr):
if bytesstr is None:
return bytesstr
data = binascii.hexlify(bytesstr)
data = to_text(b':'.join(data[i:i + 2] for i in range(0, len(data), 2)))
return data
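`_get_hex` produces the familiar colon-separated fingerprint notation. A self-contained demo of the same transformation:

```python
# Colon-separated hex formatting, as done by _get_hex above.
import binascii

def get_hex(bytesstr):
    if bytesstr is None:
        return bytesstr
    data = binascii.hexlify(bytesstr)
    return b':'.join(data[i:i + 2] for i in range(0, len(data), 2)).decode()

print(get_hex(b'\xde\xad\xbe\xef'))  # de:ad:be:ef
```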
def cryptography_decode_name(name):
'''
Given a cryptography x509.Name object, returns a string.
Raises an OpenSSLObjectError if the name is not supported.
'''
if isinstance(name, x509.DNSName):
return 'DNS:{0}'.format(name.value)
if isinstance(name, x509.IPAddress):
return 'IP:{0}'.format(name.value.compressed)
if isinstance(name, x509.RFC822Name):
return 'email:{0}'.format(name.value)
if isinstance(name, x509.UniformResourceIdentifier):
return 'URI:{0}'.format(name.value)
if isinstance(name, x509.DirectoryName):
# FIXME: test
return 'DirName:' + ''.join(['/{0}:{1}'.format(attribute.oid._name, attribute.value) for attribute in name.value])
if isinstance(name, x509.RegisteredID):
# FIXME: test
return 'RegisteredID:{0}'.format(name.value)
if isinstance(name, x509.OtherName):
# FIXME: test
return '{0}:{1}'.format(name.type_id.dotted_string, _get_hex(name.value))
raise OpenSSLObjectError('Cannot decode name "{0}"'.format(name))
def _cryptography_get_keyusage(usage):
'''
Given a key usage identifier string, returns the parameter name used by cryptography's x509.KeyUsage().
Raises an OpenSSLObjectError if the identifier is unknown.
'''
if usage in ('Digital Signature', 'digitalSignature'):
return 'digital_signature'
if usage in ('Non Repudiation', 'nonRepudiation'):
return 'content_commitment'
if usage in ('Key Encipherment', 'keyEncipherment'):
return 'key_encipherment'
if usage in ('Data Encipherment', 'dataEncipherment'):
return 'data_encipherment'
if usage in ('Key Agreement', 'keyAgreement'):
return 'key_agreement'
if usage in ('Certificate Sign', 'keyCertSign'):
return 'key_cert_sign'
if usage in ('CRL Sign', 'cRLSign'):
return 'crl_sign'
if usage in ('Encipher Only', 'encipherOnly'):
return 'encipher_only'
if usage in ('Decipher Only', 'decipherOnly'):
return 'decipher_only'
raise OpenSSLObjectError('Unknown key usage "{0}"'.format(usage))
def cryptography_parse_key_usage_params(usages):
'''
Given a list of key usage identifier strings, returns the parameters for cryptography's x509.KeyUsage().
Raises an OpenSSLObjectError if an identifier is unknown.
'''
params = dict(
digital_signature=False,
content_commitment=False,
key_encipherment=False,
data_encipherment=False,
key_agreement=False,
key_cert_sign=False,
crl_sign=False,
encipher_only=False,
decipher_only=False,
)
for usage in usages:
params[_cryptography_get_keyusage(usage)] = True
return params
def cryptography_get_basic_constraints(constraints):
'''
Given a list of constraints, returns a tuple (ca, path_length).
Raises an OpenSSLObjectError if a constraint is unknown or cannot be parsed.
'''
ca = False
path_length = None
if constraints:
for constraint in constraints:
if constraint.startswith('CA:'):
if constraint == 'CA:TRUE':
ca = True
elif constraint == 'CA:FALSE':
ca = False
else:
raise OpenSSLObjectError('Unknown basic constraint value "{0}" for CA'.format(constraint[3:]))
elif constraint.startswith('pathlen:'):
v = constraint[len('pathlen:'):]
try:
path_length = int(v)
except Exception as e:
raise OpenSSLObjectError('Cannot parse path length constraint "{0}" ({1})'.format(v, e))
else:
raise OpenSSLObjectError('Unknown basic constraint "{0}"'.format(constraint))
return ca, path_length
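The constraint parser above accepts `CA:TRUE`/`CA:FALSE` plus an optional `pathlen:N`. A stand-alone sketch of the same parsing, raising `ValueError` instead of `OpenSSLObjectError` so it runs without the surrounding module:

```python
# Minimal sketch of cryptography_get_basic_constraints above
# (ValueError substituted for OpenSSLObjectError).
def get_basic_constraints(constraints):
    ca, path_length = False, None
    for constraint in constraints or []:
        if constraint == 'CA:TRUE':
            ca = True
        elif constraint == 'CA:FALSE':
            ca = False
        elif constraint.startswith('pathlen:'):
            path_length = int(constraint[len('pathlen:'):])
        else:
            raise ValueError('Unknown basic constraint "{0}"'.format(constraint))
    return ca, path_length

print(get_basic_constraints(['CA:TRUE', 'pathlen:2']))  # (True, 2)
```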
def binary_exp_mod(f, e, m):
'''Computes f^e mod m in O(log e) multiplications modulo m.'''
# Compute len_e = floor(log_2(e))
len_e = -1
x = e
while x > 0:
x >>= 1
len_e += 1
# Compute f**e mod m
result = 1
for k in range(len_e, -1, -1):
result = (result * result) % m
if ((e >> k) & 1) != 0:
result = (result * f) % m
return result
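The square-and-multiply loop above should agree with Python's built-in three-argument `pow()`; a quick self-contained check (the function is duplicated so the snippet runs on its own):

```python
# Binary exponentiation: processes the bits of e from most to least
# significant, squaring each step and multiplying in f for set bits.
def binary_exp_mod(f, e, m):
    len_e = -1
    x = e
    while x > 0:
        x >>= 1
        len_e += 1
    result = 1
    for k in range(len_e, -1, -1):
        result = (result * result) % m
        if ((e >> k) & 1) != 0:
            result = (result * f) % m
    return result

print(binary_exp_mod(7, 560, 561) == pow(7, 560, 561))  # True
```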
def simple_gcd(a, b):
'''Compute GCD of its two inputs.'''
while b != 0:
a, b = b, a % b
return a
def quick_is_not_prime(n):
'''Does some quick checks to see if we can poke a hole into the primality of n.
A result of `False` does **not** mean that the number is prime; it just means
that we couldn't detect quickly whether it is not prime.
'''
if n <= 2:
return True
# The constant in the next line is the product of all primes < 200
if simple_gcd(n, 7799922041683461553249199106329813876687996789903550945093032474868511536164700810) > 1:
return True
# TODO: maybe do some iterations of Miller-Rabin to increase confidence
# (https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test)
return False
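The gcd shortcut above can be illustrated with a much smaller constant. The sketch below uses the product of primes below 30 (a hypothetical stand-in for the primes-below-200 product in the source), so its detection is weaker but the mechanism is identical. Like the original, it also flags small primes themselves, since their gcd with the primorial exceeds 1; that is harmless for the large-number use case this helper targets.

```python
import math

# Product of all primes below 30 -- a smaller stand-in for the
# primes-below-200 constant used in the source above.
PRIMORIAL_30 = 2 * 3 * 5 * 7 * 11 * 13 * 17 * 19 * 23 * 29

def quick_is_not_prime_sketch(n):
    if n <= 2:
        return True
    if math.gcd(n, PRIMORIAL_30) > 1:
        return True
    return False  # inconclusive: n may or may not be prime

print(quick_is_not_prime_sketch(25))       # True: shares factor 5
print(quick_is_not_prime_sketch(1000003))  # False: no small factors found
```

A `False` result only means no small factor was found, so composites of large primes slip through by design.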
python_version = (sys.version_info[0], sys.version_info[1])
if python_version >= (3, 1) or (2, 7) <= python_version < (3, 0):  # int.bit_length() exists since 2.7 and 3.1
# Ansible still supports Python 2.6 on remote nodes
def count_bits(no):
no = abs(no)
if no == 0:
return 0
return no.bit_length()
else:
# Slow, but works
def count_bits(no):
no = abs(no)
count = 0
while no > 0:
no >>= 1
count += 1
return count
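Both branches above compute the same quantity; the fallback shift loop matches `int.bit_length()` of the absolute value on modern Pythons. A self-contained equivalence check:

```python
# Fallback bit counter: shift right until zero, counting the steps.
def count_bits_slow(no):
    no = abs(no)
    count = 0
    while no > 0:
        no >>= 1
        count += 1
    return count

for n in (0, 1, -5, 255, 256, 2 ** 4096 - 1):
    assert count_bits_slow(n) == abs(n).bit_length()
print('ok')
```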
PEM_START = '-----BEGIN '
PEM_END = '-----'
PKCS8_PRIVATEKEY_NAMES = ('PRIVATE KEY', 'ENCRYPTED PRIVATE KEY')
PKCS1_PRIVATEKEY_SUFFIX = ' PRIVATE KEY'
def identify_private_key_format(content):
'''Given the contents of a private key file, identifies its format.'''
# See https://github.com/openssl/openssl/blob/master/crypto/pem/pem_pkey.c#L40-L85
# (PEM_read_bio_PrivateKey)
# and https://github.com/openssl/openssl/blob/master/include/openssl/pem.h#L46-L47
# (PEM_STRING_PKCS8, PEM_STRING_PKCS8INF)
try:
lines = content.decode('utf-8').splitlines(False)
if lines[0].startswith(PEM_START) and lines[0].endswith(PEM_END) and len(lines[0]) > len(PEM_START) + len(PEM_END):
name = lines[0][len(PEM_START):-len(PEM_END)]
if name in PKCS8_PRIVATEKEY_NAMES:
return 'pkcs8'
if len(name) > len(PKCS1_PRIVATEKEY_SUFFIX) and name.endswith(PKCS1_PRIVATEKEY_SUFFIX):
return 'pkcs1'
return 'unknown-pem'
except UnicodeDecodeError:
pass
return 'raw'
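The format sniffing above only ever inspects the first line of the file. A self-contained copy, applied to two synthetic PEM headers (key bodies elided, since only line one matters) and to non-UTF-8 bytes:

```python
# Copy of identify_private_key_format above, runnable on its own.
PEM_START = '-----BEGIN '
PEM_END = '-----'
PKCS8_PRIVATEKEY_NAMES = ('PRIVATE KEY', 'ENCRYPTED PRIVATE KEY')
PKCS1_PRIVATEKEY_SUFFIX = ' PRIVATE KEY'

def identify_private_key_format(content):
    try:
        lines = content.decode('utf-8').splitlines(False)
        if lines[0].startswith(PEM_START) and lines[0].endswith(PEM_END) and len(lines[0]) > len(PEM_START) + len(PEM_END):
            name = lines[0][len(PEM_START):-len(PEM_END)]
            if name in PKCS8_PRIVATEKEY_NAMES:
                return 'pkcs8'
            if len(name) > len(PKCS1_PRIVATEKEY_SUFFIX) and name.endswith(PKCS1_PRIVATEKEY_SUFFIX):
                return 'pkcs1'
            return 'unknown-pem'
    except UnicodeDecodeError:
        pass
    return 'raw'

print(identify_private_key_format(b'-----BEGIN PRIVATE KEY-----\n...'))      # pkcs8
print(identify_private_key_format(b'-----BEGIN RSA PRIVATE KEY-----\n...'))  # pkcs1
```

Anything that fails to decode as UTF-8 (for example DER) falls through to `'raw'`.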
def cryptography_key_needs_digest_for_signing(key):
'''Tests whether the given private key requires a digest algorithm for signing.
Ed25519 and Ed448 keys do not; they need None to be passed as the digest algorithm.
'''
if CRYPTOGRAPHY_HAS_ED25519 and isinstance(key, cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey):
return False
if CRYPTOGRAPHY_HAS_ED448 and isinstance(key, cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey):
return False
return True
def cryptography_compare_public_keys(key1, key2):
'''Tests whether two public keys are the same.
Needs special logic for Ed25519 and Ed448 keys, since they do not have public_numbers().
'''
if CRYPTOGRAPHY_HAS_ED25519:
a = isinstance(key1, cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PublicKey)
b = isinstance(key2, cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PublicKey)
if a or b:
if not a or not b:
return False
a = key1.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)
b = key2.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)
return a == b
if CRYPTOGRAPHY_HAS_ED448:
a = isinstance(key1, cryptography.hazmat.primitives.asymmetric.ed448.Ed448PublicKey)
b = isinstance(key2, cryptography.hazmat.primitives.asymmetric.ed448.Ed448PublicKey)
if a or b:
if not a or not b:
return False
a = key1.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)
b = key2.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)
return a == b
return key1.public_numbers() == key2.public_numbers()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,035 |
openssl_publickey always fails with name 'crypto' is not defined
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
`openssl_publickey` calls `module_utils.crypto.get_fingerprint()` and cannot tell it which backend to use, and that function calls `module_utils.crypto.load_privatekey()` without a backend anyway, which then defaults to the `pyopenssl` backend and fails.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`openssl_publickey`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.4
config file = /root/ansible_workspace/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.1 (default, Jan 22 2020, 06:38:00) [GCC 9.2.0]
```
It should affect devel as well.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
irrelevant
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
I'm running Manjaro 18.1.5.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- name: Reproduction
hosts: localhost
tasks:
- openssl_privatekey:
path: /tmp/test.key
- openssl_publickey:
path: /tmp/test.pub
privatekey_path: /tmp/test.key
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The `openssl_publickey` task should not fail and `/tmp/test.pub` should be created.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook 2.9.4
config file = /root/ansible_workspace/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.8.1 (default, Jan 22 2020, 06:38:00) [GCC 9.2.0]
Using /root/ansible_workspace/ansible.cfg as config file
host_list declined parsing /root/ansible_workspace/managed_nodes.yml as it did not pass its verify_file() method
script declined parsing /root/ansible_workspace/managed_nodes.yml as it did not pass its verify_file() method
Skipping empty key (hosts) in group (all)
Skipping empty key (hosts) in group (critical_infra_servers)
Skipping empty key (hosts) in group (application_servers)
Skipping empty key (hosts) in group (database_servers)
Parsed /root/ansible_workspace/managed_nodes.yml inventory source with yaml plugin
PLAYBOOK: testground.yml ******************************************************************************
1 plays in testground.yml
PLAY [Reproduction] ***********************************************************************************
META: ran handlers
TASK [openssl_privatekey] *****************************************************************************
task path: /root/ansible_workspace/testground.yml:6
Sunday 02 February 2020 23:49:44 +0800 (0:00:00.018) 0:00:00.018 *******
Using module file /usr/lib/python3.8/site-packages/ansible/modules/crypto/openssl_privatekey.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
changed: [localhost] => {
"changed": true,
"filename": "/tmp/test.key",
"fingerprint": {
"blake2b": "06:c9:0a:b4:12:e7:13:cc:aa:6a:9a:22:00:6b:c8:48:06:a1:4d:5d:df:0e:ed:10:d5:3a:23:7f:4b:6e:45:b7:1e:b0:b1:13:f6:95:46:6f:67:54:c2:07:fd:10:f1:7c:8f:84:91:96:6b:5a:44:cf:2e:e1:c3:36:78:b4:b1:db",
"blake2s": "f9:b2:ab:8f:32:4f:5d:91:2b:89:dc:da:07:89:b8:41:cd:59:5f:ac:1d:3b:e3:d8:42:5b:ee:3d:a1:87:84:4b",
"md5": "3d:b2:37:ff:11:0d:26:c3:35:9e:3a:67:66:5b:77:ac",
"sha1": "11:1f:aa:0b:4b:58:44:2a:85:e6:29:10:96:6c:44:7f:f4:f9:a2:4b",
"sha224": "a4:60:9f:fb:cd:e9:7e:b4:bb:54:84:03:70:d6:0c:39:cb:9d:cb:77:8a:c8:b7:fe:97:f7:ad:11",
"sha256": "b4:92:9a:ac:a6:84:5b:a6:31:e4:11:fe:5c:29:09:76:4c:7f:29:34:fa:a2:89:c5:25:d3:08:69:07:54:2d:69",
"sha384": "51:41:bd:08:d5:fa:2d:c1:3f:d8:69:e8:b9:36:fc:9e:68:f0:92:b3:c6:a4:f2:f1:9f:80:f4:66:e8:ad:47:f5:8d:57:ca:b4:71:b5:6d:ed:8c:f7:01:11:a6:68:27:96",
"sha3_224": "5d:73:c9:b6:80:a4:6f:0a:60:6a:8a:c9:b8:af:9e:4f:18:ca:cb:85:35:44:b4:1d:65:a3:51:4f",
"sha3_256": "a6:ac:fb:5c:8a:a8:b9:1c:c0:99:05:15:20:03:9f:ce:a8:42:03:80:75:50:aa:5d:4c:8e:0e:0e:a4:d0:6d:27",
"sha3_384": "9f:46:2a:b7:6c:14:68:37:ad:c0:12:ae:9c:a9:6a:ab:34:86:06:02:15:a5:10:57:9f:2b:78:b5:69:af:d9:f9:81:33:d2:67:58:08:00:84:8b:50:9f:76:45:ab:51:e3",
"sha3_512": "b1:3c:df:1e:27:0c:b3:b0:55:3e:cd:42:d2:67:ce:58:02:39:ac:8d:38:11:bf:74:e6:0a:84:c1:fd:4c:a5:01:74:f1:5a:3d:4b:8c:7e:98:b7:6a:18:5a:e5:98:04:a7:b6:5d:9a:4e:93:88:85:80:4f:9b:8c:35:b8:55:f6:c6",
"sha512": "cd:3a:f3:ed:dd:86:28:75:2a:8a:c5:65:88:f3:b0:8b:c5:c3:d3:b9:3d:a5:5d:78:1a:04:cb:dd:0b:58:a5:4d:9a:02:37:a4:e5:4b:ce:f3:4f:54:11:98:93:f3:dd:67:ac:ef:04:06:17:2d:a5:08:09:1a:19:12:cc:1f:56:63",
"shake_128": "f7:de:e4:52:c2:65:c0:e6:c8:7a:f9:35:d5:63:94:59:1d:c1:c5:52:b1:3e:8a:2a:dc:5a:2a:57:df:cc:32:d0",
"shake_256": "ba:d5:37:e1:78:23:f3:39:ed:be:e0:d8:f3:c1:75:a5:28:fe:b2:e1:2b:17:1d:8c:7f:04:2c:0a:2a:5e:ae:c4"
},
"invocation": {
"module_args": {
"attributes": null,
"backup": false,
"cipher": null,
"content": null,
"curve": null,
"delimiter": null,
"directory_mode": null,
"follow": false,
"force": false,
"group": null,
"mode": "0600",
"owner": null,
"passphrase": null,
"path": "/tmp/test.key",
"regexp": null,
"remote_src": null,
"select_crypto_backend": "auto",
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"size": 4096,
"src": null,
"state": "present",
"type": "RSA",
"unsafe_writes": null
}
},
"size": 4096,
"type": "RSA"
}
TASK [openssl_publickey] ******************************************************************************
task path: /root/ansible_workspace/testground.yml:9
Sunday 02 February 2020 23:49:45 +0800 (0:00:00.824) 0:00:00.843 *******
Using module file /usr/lib/python3.8/site-packages/ansible/modules/crypto/openssl_publickey.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/module_utils/crypto.py", line 209, in load_privatekey
NameError: name 'crypto' is not defined
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 94, in _ansiballz_main
File "<stdin>", line 40, in invoke_module
File "/usr/lib/python3.8/runpy.py", line 206, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/modules/crypto/openssl_publickey.py", line 432, in <module>
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/modules/crypto/openssl_publickey.py", line 416, in main
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/modules/crypto/openssl_publickey.py", line 266, in generate
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/module_utils/crypto.py", line 171, in get_fingerprint
File "/tmp/ansible_openssl_publickey_payload__lythb6u/ansible_openssl_publickey_payload.zip/ansible/module_utils/crypto.py", line 212, in load_privatekey
NameError: name 'crypto' is not defined
fatal: [localhost]: FAILED! => {
"changed": false,
"module_stderr": "*snip*",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
PLAY RECAP ********************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Sunday 02 February 2020 23:49:45 +0800 (0:00:00.270) 0:00:01.113 *******
===============================================================================
openssl_privatekey ----------------------------------------------------------------------------- 0.82s
/root/ansible_workspace/testground.yml:6 -------------------------------------------------------------
openssl_publickey ------------------------------------------------------------------------------ 0.27s
/root/ansible_workspace/testground.yml:9 -------------------------------------------------------------
```
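The traceback above shows `load_privatekey` in `module_utils/crypto.py` hitting `NameError: name 'crypto' is not defined` when it falls back to the pyOpenSSL code path on a host where pyOpenSSL is not installed. A hypothetical minimal repro of that failure pattern (not the actual `ansible.module_utils.crypto` code) — a guarded import whose guard flag is never checked before the name is used — looks like this:

```python
# Simulate a module that guards an optional import with try/except,
# but whose fallback path still references the missing name.
HAS_PYOPENSSL = True
try:
    raise ImportError("pyOpenSSL not installed")  # stand-in for a failed 'from OpenSSL import crypto'
except ImportError:
    HAS_PYOPENSSL = False

def load_privatekey_buggy(data):
    # Bug: 'crypto' was never bound because the import failed, so Python
    # raises NameError instead of a helpful "library missing" message.
    return crypto.load_privatekey(data)

def load_privatekey_fixed(data):
    # Fix: check the guard flag first and fail with an actionable error.
    if not HAS_PYOPENSSL:
        raise RuntimeError("pyOpenSSL is required to load this private key")
    return crypto.load_privatekey(data)

try:
    load_privatekey_buggy(b"...")
except NameError as exc:
    print("buggy path:", exc)   # prints: buggy path: name 'crypto' is not defined

try:
    load_privatekey_fixed(b"...")
except RuntimeError as exc:
    print("fixed path:", exc)   # clear, actionable error instead
```

The names `load_privatekey_buggy`/`load_privatekey_fixed` are illustrative only; the real fix belongs in the backend-selection logic of `module_utils/crypto.py`.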
|
https://github.com/ansible/ansible/issues/67035
|
https://github.com/ansible/ansible/pull/67036
|
b1a8bded3fe769244b16525dadcd19c2007b80c7
|
a0e5e2e4c597c8cf0fdd39c2df45fe33fd38eedb
| 2020-02-02T15:55:25Z |
python
| 2020-02-03T05:18:19Z |
lib/ansible/modules/crypto/openssl_publickey.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Yanis Guenane <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: openssl_publickey
version_added: "2.3"
short_description: Generate an OpenSSL public key from its private key.
description:
- This module allows one to (re)generate OpenSSL public keys from their private keys.
- Keys are generated in PEM or OpenSSH format.
- The module can use the cryptography Python library, or the pyOpenSSL Python
library. By default, it tries to detect which one is available. This can be
overridden with the I(select_crypto_backend) option. When I(format) is C(OpenSSH),
the C(cryptography) backend has to be used. Please note that the PyOpenSSL backend
was deprecated in Ansible 2.9 and will be removed in Ansible 2.13.
requirements:
- Either cryptography >= 1.2.3 (older versions might work as well)
- Or pyOpenSSL >= 16.0.0
- Needs cryptography >= 1.4 if I(format) is C(OpenSSH)
author:
- Yanis Guenane (@Spredzy)
- Felix Fontein (@felixfontein)
options:
state:
description:
- Whether the public key should exist or not, taking action if the state is different from what is stated.
type: str
default: present
choices: [ absent, present ]
force:
description:
- Should the key be regenerated even if it already exists.
type: bool
default: no
format:
description:
- The format of the public key.
type: str
default: PEM
choices: [ OpenSSH, PEM ]
version_added: "2.4"
path:
description:
- Name of the file in which the generated TLS/SSL public key will be written.
type: path
required: true
privatekey_path:
description:
- Path to the TLS/SSL private key from which to generate the public key.
- Either I(privatekey_path) or I(privatekey_content) must be specified, but not both.
If I(state) is C(present), one of them is required.
type: path
privatekey_content:
description:
- The content of the TLS/SSL private key from which to generate the public key.
- Either I(privatekey_path) or I(privatekey_content) must be specified, but not both.
If I(state) is C(present), one of them is required.
type: str
version_added: "2.10"
privatekey_passphrase:
description:
- The passphrase for the private key.
type: str
version_added: "2.4"
backup:
description:
- Create a backup file including a timestamp so you can get the original
public key back if you overwrote it with a different one by accident.
type: bool
default: no
version_added: "2.8"
select_crypto_backend:
description:
- Determines which crypto backend to use.
- The default choice is C(auto), which tries to use C(cryptography) if available, and falls back to C(pyopenssl).
- If set to C(pyopenssl), will try to use the L(pyOpenSSL,https://pypi.org/project/pyOpenSSL/) library.
- If set to C(cryptography), will try to use the L(cryptography,https://cryptography.io/) library.
type: str
default: auto
choices: [ auto, cryptography, pyopenssl ]
version_added: "2.9"
return_content:
description:
- If set to C(yes), will return the (current or generated) public key's content as I(publickey).
type: bool
default: no
version_added: "2.10"
extends_documentation_fragment:
- files
seealso:
- module: openssl_certificate
- module: openssl_csr
- module: openssl_dhparam
- module: openssl_pkcs12
- module: openssl_privatekey
'''
EXAMPLES = r'''
- name: Generate an OpenSSL public key in PEM format
openssl_publickey:
path: /etc/ssl/public/ansible.com.pem
privatekey_path: /etc/ssl/private/ansible.com.pem
- name: Generate an OpenSSL public key in PEM format from an inline key
openssl_publickey:
path: /etc/ssl/public/ansible.com.pem
privatekey_content: "{{ private_key_content }}"
- name: Generate an OpenSSL public key in OpenSSH v2 format
openssl_publickey:
path: /etc/ssl/public/ansible.com.pem
privatekey_path: /etc/ssl/private/ansible.com.pem
format: OpenSSH
- name: Generate an OpenSSL public key with a passphrase protected private key
openssl_publickey:
path: /etc/ssl/public/ansible.com.pem
privatekey_path: /etc/ssl/private/ansible.com.pem
privatekey_passphrase: ansible
- name: Force regenerate an OpenSSL public key if it already exists
openssl_publickey:
path: /etc/ssl/public/ansible.com.pem
privatekey_path: /etc/ssl/private/ansible.com.pem
force: yes
- name: Remove an OpenSSL public key
openssl_publickey:
path: /etc/ssl/public/ansible.com.pem
state: absent
'''
RETURN = r'''
privatekey:
description:
- Path to the TLS/SSL private key the public key was generated from.
- Will be C(none) if the private key has been provided in I(privatekey_content).
returned: changed or success
type: str
sample: /etc/ssl/private/ansible.com.pem
format:
description: The format of the public key (PEM, OpenSSH, ...).
returned: changed or success
type: str
sample: PEM
filename:
description: Path to the generated TLS/SSL public key file.
returned: changed or success
type: str
sample: /etc/ssl/public/ansible.com.pem
fingerprint:
description:
- The fingerprint of the public key. Fingerprint will be generated for each hashlib.algorithms available.
- Requires PyOpenSSL >= 16.0 for meaningful output.
returned: changed or success
type: dict
sample:
md5: "84:75:71:72:8d:04:b5:6c:4d:37:6d:66:83:f5:4c:29"
sha1: "51:cc:7c:68:5d:eb:41:43:88:7e:1a:ae:c7:f8:24:72:ee:71:f6:10"
sha224: "b1:19:a6:6c:14:ac:33:1d:ed:18:50:d3:06:5c:b2:32:91:f1:f1:52:8c:cb:d5:75:e9:f5:9b:46"
sha256: "41:ab:c7:cb:d5:5f:30:60:46:99:ac:d4:00:70:cf:a1:76:4f:24:5d:10:24:57:5d:51:6e:09:97:df:2f:de:c7"
sha384: "85:39:50:4e:de:d9:19:33:40:70:ae:10:ab:59:24:19:51:c3:a2:e4:0b:1c:b1:6e:dd:b3:0c:d9:9e:6a:46:af:da:18:f8:ef:ae:2e:c0:9a:75:2c:9b:b3:0f:3a:5f:3d"
sha512: "fd:ed:5e:39:48:5f:9f:fe:7f:25:06:3f:79:08:cd:ee:a5:e7:b3:3d:13:82:87:1f:84:e1:f5:c7:28:77:53:94:86:56:38:69:f0:d9:35:22:01:1e:a6:60:...:0f:9b"
backup_file:
description: Name of backup file created.
returned: changed and if I(backup) is C(yes)
type: str
sample: /path/to/publickey.pem.2019-03-09@11:22~
publickey:
description: The (current or generated) public key's content.
returned: if I(state) is C(present) and I(return_content) is C(yes)
type: str
version_added: "2.10"
'''
import os
import traceback
from distutils.version import LooseVersion
MINIMAL_PYOPENSSL_VERSION = '16.0.0'
MINIMAL_CRYPTOGRAPHY_VERSION = '1.2.3'
MINIMAL_CRYPTOGRAPHY_VERSION_OPENSSH = '1.4'
PYOPENSSL_IMP_ERR = None
try:
import OpenSSL
from OpenSSL import crypto
PYOPENSSL_VERSION = LooseVersion(OpenSSL.__version__)
except ImportError:
PYOPENSSL_IMP_ERR = traceback.format_exc()
PYOPENSSL_FOUND = False
else:
PYOPENSSL_FOUND = True
CRYPTOGRAPHY_IMP_ERR = None
try:
import cryptography
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization as crypto_serialization
CRYPTOGRAPHY_VERSION = LooseVersion(cryptography.__version__)
except ImportError:
CRYPTOGRAPHY_IMP_ERR = traceback.format_exc()
CRYPTOGRAPHY_FOUND = False
else:
CRYPTOGRAPHY_FOUND = True
from ansible.module_utils import crypto as crypto_utils
from ansible.module_utils._text import to_native
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class PublicKeyError(crypto_utils.OpenSSLObjectError):
pass
class PublicKey(crypto_utils.OpenSSLObject):
def __init__(self, module, backend):
super(PublicKey, self).__init__(
module.params['path'],
module.params['state'],
module.params['force'],
module.check_mode
)
self.format = module.params['format']
self.privatekey_path = module.params['privatekey_path']
self.privatekey_content = module.params['privatekey_content']
if self.privatekey_content is not None:
self.privatekey_content = self.privatekey_content.encode('utf-8')
self.privatekey_passphrase = module.params['privatekey_passphrase']
self.privatekey = None
self.publickey_bytes = None
self.return_content = module.params['return_content']
self.fingerprint = {}
self.backend = backend
self.backup = module.params['backup']
self.backup_file = None
def _create_publickey(self, module):
self.privatekey = crypto_utils.load_privatekey(
path=self.privatekey_path,
content=self.privatekey_content,
passphrase=self.privatekey_passphrase,
backend=self.backend
)
if self.backend == 'cryptography':
if self.format == 'OpenSSH':
return self.privatekey.public_key().public_bytes(
crypto_serialization.Encoding.OpenSSH,
crypto_serialization.PublicFormat.OpenSSH
)
else:
return self.privatekey.public_key().public_bytes(
crypto_serialization.Encoding.PEM,
crypto_serialization.PublicFormat.SubjectPublicKeyInfo
)
else:
try:
return crypto.dump_publickey(crypto.FILETYPE_PEM, self.privatekey)
except AttributeError as dummy:
raise PublicKeyError('You need to have PyOpenSSL>=16.0.0 to generate public keys')
def generate(self, module):
"""Generate the public key."""
if self.privatekey_content is None and not os.path.exists(self.privatekey_path):
raise PublicKeyError(
'The private key %s does not exist' % self.privatekey_path
)
if not self.check(module, perms_required=False) or self.force:
try:
publickey_content = self._create_publickey(module)
if self.return_content:
self.publickey_bytes = publickey_content
if self.backup:
self.backup_file = module.backup_local(self.path)
crypto_utils.write_file(module, publickey_content)
self.changed = True
except crypto_utils.OpenSSLBadPassphraseError as exc:
raise PublicKeyError(exc)
except (IOError, OSError) as exc:
raise PublicKeyError(exc)
self.fingerprint = crypto_utils.get_fingerprint(
path=self.privatekey_path,
content=self.privatekey_content,
passphrase=self.privatekey_passphrase
)
file_args = module.load_file_common_arguments(module.params)
if module.set_fs_attributes_if_different(file_args, False):
self.changed = True
def check(self, module, perms_required=True):
"""Ensure the resource is in its desired state."""
state_and_perms = super(PublicKey, self).check(module, perms_required)
def _check_privatekey():
if self.privatekey_content is None and not os.path.exists(self.privatekey_path):
return False
try:
with open(self.path, 'rb') as public_key_fh:
publickey_content = public_key_fh.read()
if self.return_content:
self.publickey_bytes = publickey_content
if self.backend == 'cryptography':
if self.format == 'OpenSSH':
# Read and dump public key. Makes sure that the comment is stripped off.
current_publickey = crypto_serialization.load_ssh_public_key(publickey_content, backend=default_backend())
publickey_content = current_publickey.public_bytes(
crypto_serialization.Encoding.OpenSSH,
crypto_serialization.PublicFormat.OpenSSH
)
else:
current_publickey = crypto_serialization.load_pem_public_key(publickey_content, backend=default_backend())
publickey_content = current_publickey.public_bytes(
crypto_serialization.Encoding.PEM,
crypto_serialization.PublicFormat.SubjectPublicKeyInfo
)
else:
publickey_content = crypto.dump_publickey(
crypto.FILETYPE_PEM,
crypto.load_publickey(crypto.FILETYPE_PEM, publickey_content)
)
except Exception as dummy:
return False
try:
desired_publickey = self._create_publickey(module)
except crypto_utils.OpenSSLBadPassphraseError as exc:
raise PublicKeyError(exc)
return publickey_content == desired_publickey
if not state_and_perms:
return state_and_perms
return _check_privatekey()
def remove(self, module):
if self.backup:
self.backup_file = module.backup_local(self.path)
super(PublicKey, self).remove(module)
def dump(self):
"""Serialize the object into a dictionary."""
result = {
'privatekey': self.privatekey_path,
'filename': self.path,
'format': self.format,
'changed': self.changed,
'fingerprint': self.fingerprint,
}
if self.backup_file:
result['backup_file'] = self.backup_file
if self.return_content:
if self.publickey_bytes is None:
self.publickey_bytes = crypto_utils.load_file_if_exists(self.path, ignore_errors=True)
result['publickey'] = self.publickey_bytes.decode('utf-8') if self.publickey_bytes else None
return result
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['present', 'absent']),
force=dict(type='bool', default=False),
path=dict(type='path', required=True),
privatekey_path=dict(type='path'),
privatekey_content=dict(type='str'),
format=dict(type='str', default='PEM', choices=['OpenSSH', 'PEM']),
privatekey_passphrase=dict(type='str', no_log=True),
backup=dict(type='bool', default=False),
select_crypto_backend=dict(type='str', choices=['auto', 'pyopenssl', 'cryptography'], default='auto'),
return_content=dict(type='bool', default=False),
),
supports_check_mode=True,
add_file_common_args=True,
required_if=[('state', 'present', ['privatekey_path', 'privatekey_content'], True)],
mutually_exclusive=(
['privatekey_path', 'privatekey_content'],
),
)
minimal_cryptography_version = MINIMAL_CRYPTOGRAPHY_VERSION
if module.params['format'] == 'OpenSSH':
minimal_cryptography_version = MINIMAL_CRYPTOGRAPHY_VERSION_OPENSSH
backend = module.params['select_crypto_backend']
if backend == 'auto':
# Detection what is possible
can_use_cryptography = CRYPTOGRAPHY_FOUND and CRYPTOGRAPHY_VERSION >= LooseVersion(minimal_cryptography_version)
can_use_pyopenssl = PYOPENSSL_FOUND and PYOPENSSL_VERSION >= LooseVersion(MINIMAL_PYOPENSSL_VERSION)
# Decision
if can_use_cryptography:
backend = 'cryptography'
elif can_use_pyopenssl:
if module.params['format'] == 'OpenSSH':
module.fail_json(
msg=missing_required_lib('cryptography >= {0}'.format(MINIMAL_CRYPTOGRAPHY_VERSION_OPENSSH)),
exception=CRYPTOGRAPHY_IMP_ERR
)
backend = 'pyopenssl'
# Success?
if backend == 'auto':
module.fail_json(msg=("Can't detect any of the required Python libraries "
"cryptography (>= {0}) or PyOpenSSL (>= {1})").format(
minimal_cryptography_version,
MINIMAL_PYOPENSSL_VERSION))
if module.params['format'] == 'OpenSSH' and backend != 'cryptography':
module.fail_json(msg="Format OpenSSH requires the cryptography backend.")
if backend == 'pyopenssl':
if not PYOPENSSL_FOUND:
module.fail_json(msg=missing_required_lib('pyOpenSSL >= {0}'.format(MINIMAL_PYOPENSSL_VERSION)),
exception=PYOPENSSL_IMP_ERR)
module.deprecate('The module is using the PyOpenSSL backend. This backend has been deprecated', version='2.13')
elif backend == 'cryptography':
if not CRYPTOGRAPHY_FOUND:
module.fail_json(msg=missing_required_lib('cryptography >= {0}'.format(minimal_cryptography_version)),
exception=CRYPTOGRAPHY_IMP_ERR)
base_dir = os.path.dirname(module.params['path']) or '.'
if not os.path.isdir(base_dir):
module.fail_json(
name=base_dir,
msg="The directory '%s' does not exist or the file is not a directory" % base_dir
)
try:
public_key = PublicKey(module, backend)
if public_key.state == 'present':
if module.check_mode:
result = public_key.dump()
result['changed'] = module.params['force'] or not public_key.check(module)
module.exit_json(**result)
public_key.generate(module)
else:
if module.check_mode:
result = public_key.dump()
result['changed'] = os.path.exists(module.params['path'])
module.exit_json(**result)
public_key.remove(module)
result = public_key.dump()
module.exit_json(**result)
except crypto_utils.OpenSSLObjectError as exc:
module.fail_json(msg=to_native(exc))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,610 |
ec2 and ec2_instance module overlap
|
##### SUMMARY
Ansible currently contains two modules for managing AWS EC2 instances: [`ec2`](https://docs.ansible.com/ansible/latest/modules/ec2_module.html) and [`ec2_instance`](https://docs.ansible.com/ansible/latest/modules/ec2_instance_module.html).
There is considerable overlap between the modules, but it's unclear what the differences are and why there are two modules in the first place.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
ec2
ec2_instance
##### ANSIBLE VERSION
```paste below
ansible 2.10.0.dev0
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/ansible/lib/ansible
executable location = /home/user/ansible/bin/ansible
python version = 3.7.5 (default, Oct 27 2019, 15:43:29) [GCC 9.2.1 20191022]
```
##### CONFIGURATION
```paste below
RETRY_FILES_ENABLED(/home/user/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
Debian bullseye/sid
##### STEPS TO REPRODUCE
```yaml
N/A
```
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
```paste below
N/A
```
|
https://github.com/ansible/ansible/issues/64610
|
https://github.com/ansible/ansible/pull/67009
|
fad261b04f569ce12a5af5e2e09d08da919c7a8a
|
f49408287a96329542ba71958afe0f47363e4c28
| 2019-11-08T16:12:28Z |
python
| 2020-02-03T17:12:13Z |
lib/ansible/modules/cloud/amazon/ec2.py
|
#!/usr/bin/python
# This file is part of Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = '''
---
module: ec2
short_description: create, terminate, start or stop an instance in ec2
description:
- Creates or terminates ec2 instances.
version_added: "0.9"
options:
key_name:
description:
- Key pair to use on the instance.
- The SSH key must already exist in AWS in order to use this argument.
- Keys can be created / deleted using the M(ec2_key) module.
aliases: ['keypair']
type: str
id:
version_added: "1.1"
description:
- Identifier for this instance or set of instances, so that the module will be idempotent with respect to EC2 instances.
- This identifier is valid for at least 24 hours after the termination of the instance, and should not be reused for another call later on.
- For details, see the description of client token at U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html).
type: str
group:
description:
- Security group (or list of groups) to use with the instance.
aliases: [ 'groups' ]
type: list
elements: str
group_id:
version_added: "1.1"
description:
- Security group id (or list of ids) to use with the instance.
type: list
elements: str
zone:
version_added: "1.2"
description:
- AWS availability zone in which to launch the instance.
aliases: [ 'aws_zone', 'ec2_zone' ]
type: str
instance_type:
description:
- Instance type to use for the instance, see U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html).
- Required when creating a new instance.
type: str
aliases: ['type']
tenancy:
version_added: "1.9"
description:
- An instance with a tenancy of C(dedicated) runs on single-tenant hardware and can only be launched into a VPC.
- Note that to use dedicated tenancy you MUST specify a I(vpc_subnet_id) as well.
- Dedicated tenancy is not available for EC2 "micro" instances.
default: default
choices: [ "default", "dedicated" ]
type: str
spot_price:
version_added: "1.5"
description:
- Maximum spot price to bid. If not set, a regular on-demand instance is requested.
- A spot request is made with this maximum bid. When it is filled, the instance is started.
type: str
spot_type:
version_added: "2.0"
description:
- The type of spot request.
- After being interrupted a C(persistent) spot instance will be started once there is capacity to fill the request again.
default: "one-time"
choices: [ "one-time", "persistent" ]
type: str
image:
description:
- I(ami) ID to use for the instance.
- Required when I(state=present).
type: str
kernel:
description:
- Kernel eki to use for the instance.
type: str
ramdisk:
description:
- Ramdisk eri to use for the instance.
type: str
wait:
description:
- Wait for the instance to reach its desired state before returning.
- Does not wait for SSH, see the 'wait_for_connection' example for details.
type: bool
default: false
wait_timeout:
description:
- How long before wait gives up, in seconds.
default: 300
type: int
spot_wait_timeout:
version_added: "1.5"
description:
- How long to wait for the spot instance request to be fulfilled. Affects 'Request valid until' for setting spot request lifespan.
default: 600
type: int
count:
description:
- Number of instances to launch.
default: 1
type: int
monitoring:
version_added: "1.1"
description:
- Enable detailed monitoring (CloudWatch) for instance.
type: bool
default: false
user_data:
version_added: "0.9"
description:
- Opaque blob of data which is made available to the EC2 instance.
type: str
instance_tags:
version_added: "1.0"
description:
- A hash/dictionary of tags to add to the new instance or for starting/stopping instance by tag; '{"key":"value"}' and '{"key":"value","key":"value"}'.
type: dict
placement_group:
version_added: "1.3"
description:
- Placement group for the instance when using EC2 Clustered Compute.
type: str
vpc_subnet_id:
version_added: "1.1"
description:
- the subnet ID in which to launch the instance (VPC).
type: str
assign_public_ip:
version_added: "1.5"
description:
- When provisioning within vpc, assign a public IP address. Boto library must be 2.13.0+.
type: bool
private_ip:
version_added: "1.2"
description:
- The private ip address to assign the instance (from the vpc subnet).
type: str
instance_profile_name:
version_added: "1.3"
description:
- Name of the IAM instance profile (i.e. what the EC2 console refers to as an "IAM Role") to use. Boto library must be 2.5.0+.
type: str
instance_ids:
version_added: "1.3"
description:
- "list of instance ids, currently used for states: absent, running, stopped"
aliases: ['instance_id']
type: list
elements: str
source_dest_check:
version_added: "1.6"
description:
- Enable or Disable the Source/Destination checks (for NAT instances and Virtual Routers).
When initially creating an instance the EC2 API defaults this to C(True).
type: bool
termination_protection:
version_added: "2.0"
description:
- Enable or Disable the Termination Protection.
type: bool
default: false
instance_initiated_shutdown_behavior:
version_added: "2.2"
description:
- Set whether AWS will Stop or Terminate an instance on shutdown. This parameter is ignored when using instance-store
images (which require termination on shutdown).
default: 'stop'
choices: [ "stop", "terminate" ]
type: str
state:
version_added: "1.3"
description:
- Create, terminate, start, stop or restart instances. The state 'restarted' was added in Ansible 2.2.
- When I(state=absent), I(instance_ids) is required.
- When I(state=running), I(state=stopped) or I(state=restarted) then either I(instance_ids) or I(instance_tags) is required.
default: 'present'
choices: ['absent', 'present', 'restarted', 'running', 'stopped']
type: str
volumes:
version_added: "1.5"
description:
- A list of hash/dictionaries of volumes to add to the new instance.
type: list
elements: dict
suboptions:
device_name:
type: str
required: true
description:
- A name for the device (For example C(/dev/sda)).
delete_on_termination:
type: bool
default: false
description:
- Whether the volume should be automatically deleted when the instance is terminated.
ephemeral:
type: str
description:
- Whether the volume should be ephemeral.
- Data on ephemeral volumes is lost when the instance is stopped.
- Mutually exclusive with the I(snapshot) parameter.
encrypted:
type: bool
default: false
description:
- Whether the volume should be encrypted using the 'aws/ebs' KMS CMK.
snapshot:
type: str
description:
- The ID of an EBS snapshot to copy when creating the volume.
- Mutually exclusive with the I(ephemeral) parameter.
volume_type:
type: str
description:
- The type of volume to create.
- See U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) for more information on the available volume types.
volume_size:
type: int
description:
- The size of the volume (in GiB).
iops:
type: int
description:
- The number of IOPS per second to provision for the volume.
- Required when I(volume_type=io1).
ebs_optimized:
version_added: "1.6"
description:
- Whether instance is using optimized EBS volumes, see U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
default: false
type: bool
exact_count:
version_added: "1.5"
description:
- An integer value which indicates how many instances that match the 'count_tag' parameter should be running.
Instances are either created or terminated based on this value.
type: int
count_tag:
version_added: "1.5"
description:
- Used with I(exact_count) to determine how many nodes based on a specific tag criteria should be running.
This can be expressed in multiple ways and is shown in the EXAMPLES section. For instance, one can request 25 servers
that are tagged with "class=webserver". The specified tag must already exist or be passed in as the I(instance_tags) option.
type: raw
network_interfaces:
version_added: "2.0"
description:
- A list of existing network interfaces to attach to the instance at launch. When specifying existing network interfaces,
none of the I(assign_public_ip), I(private_ip), I(vpc_subnet_id), I(group), or I(group_id) parameters may be used. (Those parameters are
for creating a new network interface at launch.)
aliases: ['network_interface']
type: list
elements: str
spot_launch_group:
version_added: "2.1"
description:
- Launch group for spot requests, see U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/how-spot-instances-work.html#spot-launch-group).
type: str
author:
- "Tim Gerla (@tgerla)"
- "Lester Wade (@lwade)"
- "Seth Vidal (@skvidal)"
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Basic provisioning example
- ec2:
key_name: mykey
instance_type: t2.micro
image: ami-123456
wait: yes
group: webserver
count: 3
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
# Advanced example with tagging and CloudWatch
- ec2:
key_name: mykey
group: databases
instance_type: t2.micro
image: ami-123456
wait: yes
wait_timeout: 500
count: 5
instance_tags:
db: postgres
monitoring: yes
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
# Single instance with additional IOPS volume from snapshot and volume delete on termination
- ec2:
key_name: mykey
group: webserver
instance_type: c3.medium
image: ami-123456
wait: yes
wait_timeout: 500
volumes:
- device_name: /dev/sdb
snapshot: snap-abcdef12
volume_type: io1
iops: 1000
volume_size: 100
delete_on_termination: true
monitoring: yes
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
# Single instance with ssd gp2 root volume
- ec2:
key_name: mykey
group: webserver
instance_type: c3.medium
image: ami-123456
wait: yes
wait_timeout: 500
volumes:
- device_name: /dev/xvda
volume_type: gp2
volume_size: 8
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
count_tag:
Name: dbserver
exact_count: 1
# Multiple groups example
- ec2:
key_name: mykey
group: ['databases', 'internal-services', 'sshable', 'and-so-forth']
instance_type: m1.large
image: ami-6e649707
wait: yes
wait_timeout: 500
count: 5
instance_tags:
db: postgres
monitoring: yes
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
# Multiple instances with additional volume from snapshot
- ec2:
key_name: mykey
group: webserver
instance_type: m1.large
image: ami-6e649707
wait: yes
wait_timeout: 500
count: 5
volumes:
- device_name: /dev/sdb
snapshot: snap-abcdef12
volume_size: 10
monitoring: yes
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
# Dedicated tenancy example
- local_action:
module: ec2
assign_public_ip: yes
group_id: sg-1dc53f72
key_name: mykey
image: ami-6e649707
instance_type: m1.small
tenancy: dedicated
vpc_subnet_id: subnet-29e63245
wait: yes
# Spot instance example
- ec2:
spot_price: 0.24
spot_wait_timeout: 600
keypair: mykey
group_id: sg-1dc53f72
instance_type: m1.small
image: ami-6e649707
wait: yes
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
spot_launch_group: report_generators
instance_initiated_shutdown_behavior: terminate
# Examples using pre-existing network interfaces
- ec2:
key_name: mykey
instance_type: t2.small
image: ami-f005ba11
network_interface: eni-deadbeef
- ec2:
key_name: mykey
instance_type: t2.small
image: ami-f005ba11
network_interfaces: ['eni-deadbeef', 'eni-5ca1ab1e']
# Launch instances, runs some tasks
# and then terminate them
- name: Create a sandbox instance
hosts: localhost
gather_facts: False
vars:
keypair: my_keypair
instance_type: m1.small
security_group: my_securitygroup
image: my_ami_id
region: us-east-1
tasks:
- name: Launch instance
ec2:
key_name: "{{ keypair }}"
group: "{{ security_group }}"
instance_type: "{{ instance_type }}"
image: "{{ image }}"
wait: true
region: "{{ region }}"
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
register: ec2
- name: Add new instance to host group
add_host:
hostname: "{{ item.public_ip }}"
groupname: launched
loop: "{{ ec2.instances }}"
- name: Wait for SSH to come up
delegate_to: "{{ item.public_dns_name }}"
wait_for_connection:
delay: 60
timeout: 320
loop: "{{ ec2.instances }}"
- name: Configure instance(s)
hosts: launched
become: True
gather_facts: True
roles:
- my_awesome_role
- my_awesome_test
- name: Terminate instances
hosts: localhost
tasks:
- name: Terminate instances that were previously launched
ec2:
state: 'absent'
instance_ids: '{{ ec2.instance_ids }}'
# Start a few existing instances, run some tasks
# and stop the instances
- name: Start sandbox instances
hosts: localhost
gather_facts: false
vars:
instance_ids:
- 'i-xxxxxx'
- 'i-xxxxxx'
- 'i-xxxxxx'
region: us-east-1
tasks:
- name: Start the sandbox instances
ec2:
instance_ids: '{{ instance_ids }}'
region: '{{ region }}'
state: running
wait: True
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
roles:
- do_neat_stuff
- do_more_neat_stuff
- name: Stop sandbox instances
hosts: localhost
gather_facts: false
vars:
instance_ids:
- 'i-xxxxxx'
- 'i-xxxxxx'
- 'i-xxxxxx'
region: us-east-1
tasks:
- name: Stop the sandbox instances
ec2:
instance_ids: '{{ instance_ids }}'
region: '{{ region }}'
state: stopped
wait: True
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
#
# Start stopped instances specified by tag
#
- local_action:
module: ec2
instance_tags:
Name: ExtraPower
state: running
#
# Restart instances specified by tag
#
- local_action:
module: ec2
instance_tags:
Name: ExtraPower
state: restarted
#
# Enforce that 5 instances with a tag "foo" are running
# (Highly recommended!)
#
- ec2:
key_name: mykey
instance_type: c1.medium
image: ami-40603AD1
wait: yes
group: webserver
instance_tags:
foo: bar
exact_count: 5
count_tag: foo
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
#
# Enforce that there are 5 running instances named "database" with a "dbtype" of "postgres"
#
- ec2:
key_name: mykey
instance_type: c1.medium
image: ami-40603AD1
wait: yes
group: webserver
instance_tags:
Name: database
dbtype: postgres
exact_count: 5
count_tag:
Name: database
dbtype: postgres
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
#
# count_tag complex argument examples
#
# instances with tag foo
- ec2:
count_tag:
foo:
# instances with tag foo=bar
- ec2:
count_tag:
foo: bar
# instances with tags foo=bar & baz
- ec2:
count_tag:
foo: bar
baz:
# instances with tags foo & bar & baz=bang
- ec2:
count_tag:
- foo
- bar
- baz: bang
'''
import time
import datetime
import traceback
from ast import literal_eval
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import get_aws_connection_info, ec2_argument_spec, ec2_connect
from ansible.module_utils.six import get_function_code, string_types
from ansible.module_utils._text import to_bytes, to_text
try:
import boto.ec2
from boto.ec2.blockdevicemapping import BlockDeviceType, BlockDeviceMapping
from boto.exception import EC2ResponseError
from boto import connect_ec2_endpoint
from boto import connect_vpc
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
def find_running_instances_by_count_tag(module, ec2, vpc, count_tag, zone=None):
# get reservations for instances that match tag(s) and are in the desired state
state = module.params.get('state')
if state not in ['running', 'stopped']:
state = None
reservations = get_reservations(module, ec2, vpc, tags=count_tag, state=state, zone=zone)
instances = []
for res in reservations:
if hasattr(res, 'instances'):
for inst in res.instances:
if inst.state == 'terminated' or inst.state == 'shutting-down':
continue
instances.append(inst)
return reservations, instances
def _set_none_to_blank(dictionary):
result = dictionary
for k in result:
if isinstance(result[k], dict):
result[k] = _set_none_to_blank(result[k])
elif not result[k]:
result[k] = ""
return result
def get_reservations(module, ec2, vpc, tags=None, state=None, zone=None):
# TODO: filters do not work with tags that have underscores
filters = dict()
vpc_subnet_id = module.params.get('vpc_subnet_id')
vpc_id = None
if vpc_subnet_id:
filters.update({"subnet-id": vpc_subnet_id})
if vpc:
vpc_id = vpc.get_all_subnets(subnet_ids=[vpc_subnet_id])[0].vpc_id
if vpc_id:
filters.update({"vpc-id": vpc_id})
if tags is not None:
if isinstance(tags, str):
try:
tags = literal_eval(tags)
except Exception:
pass
# if not a string type, convert and make sure it's a text string
if isinstance(tags, int):
tags = to_text(tags)
# if string, we only care that a tag of that name exists
if isinstance(tags, str):
filters.update({"tag-key": tags})
# if list, append each item to filters
if isinstance(tags, list):
for x in tags:
if isinstance(x, dict):
x = _set_none_to_blank(x)
filters.update(dict(("tag:" + tn, tv) for (tn, tv) in x.items()))
else:
filters.update({"tag-key": x})
# if dict, add the key and value to the filter
if isinstance(tags, dict):
tags = _set_none_to_blank(tags)
filters.update(dict(("tag:" + tn, tv) for (tn, tv) in tags.items()))
# let's check to see if the filters dict is empty; if so, then stop
if not filters:
module.fail_json(msg="Filters based on tag is empty => tags: %s" % (tags))
if state:
# http://stackoverflow.com/questions/437511/what-are-the-valid-instancestates-for-the-amazon-ec2-api
filters.update({'instance-state-name': state})
if zone:
filters.update({'availability-zone': zone})
if module.params.get('id'):
filters['client-token'] = module.params['id']
results = ec2.get_all_instances(filters=filters)
return results
def get_instance_info(inst):
"""
Retrieves instance information from an instance
object and returns it as a dictionary
"""
instance_info = {'id': inst.id,
'ami_launch_index': inst.ami_launch_index,
'private_ip': inst.private_ip_address,
'private_dns_name': inst.private_dns_name,
'public_ip': inst.ip_address,
'dns_name': inst.dns_name,
'public_dns_name': inst.public_dns_name,
'state_code': inst.state_code,
'architecture': inst.architecture,
'image_id': inst.image_id,
'key_name': inst.key_name,
'placement': inst.placement,
'region': inst.placement[:-1],
'kernel': inst.kernel,
'ramdisk': inst.ramdisk,
'launch_time': inst.launch_time,
'instance_type': inst.instance_type,
'root_device_type': inst.root_device_type,
'root_device_name': inst.root_device_name,
'state': inst.state,
'hypervisor': inst.hypervisor,
'tags': inst.tags,
'groups': dict((group.id, group.name) for group in inst.groups),
}
try:
instance_info['virtualization_type'] = getattr(inst, 'virtualization_type')
except AttributeError:
instance_info['virtualization_type'] = None
try:
instance_info['ebs_optimized'] = getattr(inst, 'ebs_optimized')
except AttributeError:
instance_info['ebs_optimized'] = False
try:
bdm_dict = {}
bdm = getattr(inst, 'block_device_mapping')
for device_name in bdm.keys():
bdm_dict[device_name] = {
'status': bdm[device_name].status,
'volume_id': bdm[device_name].volume_id,
'delete_on_termination': bdm[device_name].delete_on_termination
}
instance_info['block_device_mapping'] = bdm_dict
except AttributeError:
instance_info['block_device_mapping'] = False
try:
instance_info['tenancy'] = getattr(inst, 'placement_tenancy')
except AttributeError:
instance_info['tenancy'] = 'default'
return instance_info
def boto_supports_associate_public_ip_address(ec2):
"""
Check if Boto library has associate_public_ip_address in the NetworkInterfaceSpecification
class. Added in Boto 2.13.0
ec2: authenticated ec2 connection object
Returns:
True if Boto library accepts associate_public_ip_address argument, else False
"""
try:
network_interface = boto.ec2.networkinterface.NetworkInterfaceSpecification()
getattr(network_interface, "associate_public_ip_address")
return True
except AttributeError:
return False
def boto_supports_profile_name_arg(ec2):
"""
Check if Boto library has instance_profile_name argument. instance_profile_name has been added in Boto 2.5.0
ec2: authenticated ec2 connection object
Returns:
True if Boto library accepts instance_profile_name argument, else False
"""
run_instances_method = getattr(ec2, 'run_instances')
return 'instance_profile_name' in get_function_code(run_instances_method).co_varnames
def boto_supports_volume_encryption():
"""
Check if Boto library supports encryption of EBS volumes (added in 2.29.0)
Returns:
True if the boto library supports encryption of EBS volumes (version 2.29.0 or later), else False
"""
return hasattr(boto, 'Version') and LooseVersion(boto.Version) >= LooseVersion('2.29.0')
def create_block_device(module, ec2, volume):
# Not aware of a way to determine this programmatically
# http://aws.amazon.com/about-aws/whats-new/2013/10/09/ebs-provisioned-iops-maximum-iops-gb-ratio-increased-to-30-1/
MAX_IOPS_TO_SIZE_RATIO = 30
volume_type = volume.get('volume_type')
if 'snapshot' not in volume and 'ephemeral' not in volume:
if 'volume_size' not in volume:
module.fail_json(msg='Size must be specified when creating a new volume or modifying the root volume')
if 'snapshot' in volume:
if volume_type == 'io1' and 'iops' not in volume:
module.fail_json(msg='io1 volumes must have an iops value set')
if 'iops' in volume:
snapshot = ec2.get_all_snapshots(snapshot_ids=[volume['snapshot']])[0]
size = volume.get('volume_size', snapshot.volume_size)
if int(volume['iops']) > MAX_IOPS_TO_SIZE_RATIO * size:
module.fail_json(msg='IOPS must be at most %d times greater than size' % MAX_IOPS_TO_SIZE_RATIO)
if 'ephemeral' in volume:
if 'snapshot' in volume:
module.fail_json(msg='Cannot set both ephemeral and snapshot')
if boto_supports_volume_encryption():
return BlockDeviceType(snapshot_id=volume.get('snapshot'),
ephemeral_name=volume.get('ephemeral'),
size=volume.get('volume_size'),
volume_type=volume_type,
delete_on_termination=volume.get('delete_on_termination', False),
iops=volume.get('iops'),
encrypted=volume.get('encrypted', None))
else:
return BlockDeviceType(snapshot_id=volume.get('snapshot'),
ephemeral_name=volume.get('ephemeral'),
size=volume.get('volume_size'),
volume_type=volume_type,
delete_on_termination=volume.get('delete_on_termination', False),
iops=volume.get('iops'))
def boto_supports_param_in_spot_request(ec2, param):
"""
Check if Boto library has a <param> in its request_spot_instances() method. For example, the placement_group parameter wasn't added until 2.3.0.
ec2: authenticated ec2 connection object
Returns:
True if boto library has the named param as an argument on the request_spot_instances method, else False
"""
method = getattr(ec2, 'request_spot_instances')
return param in get_function_code(method).co_varnames
def await_spot_requests(module, ec2, spot_requests, count):
"""
Wait for a group of spot requests to be fulfilled, or fail.
module: Ansible module object
ec2: authenticated ec2 connection object
spot_requests: list of boto.ec2.spotinstancerequest.SpotInstanceRequest objects returned by ec2.request_spot_instances
count: Total number of instances to be created by the spot requests
Returns:
list of instance IDs created by the spot request(s)
"""
spot_wait_timeout = int(module.params.get('spot_wait_timeout'))
wait_complete = time.time() + spot_wait_timeout
spot_req_inst_ids = dict()
while time.time() < wait_complete:
reqs = ec2.get_all_spot_instance_requests()
for sirb in spot_requests:
if sirb.id in spot_req_inst_ids:
continue
for sir in reqs:
if sir.id != sirb.id:
continue # this is not our spot instance
if sir.instance_id is not None:
spot_req_inst_ids[sirb.id] = sir.instance_id
elif sir.state == 'open':
continue # still waiting, nothing to do here
elif sir.state == 'active':
continue # Instance is created already, nothing to do here
elif sir.state == 'failed':
module.fail_json(msg="Spot instance request %s failed with status %s and fault %s:%s" % (
sir.id, sir.status.code, sir.fault.code, sir.fault.message))
elif sir.state == 'cancelled':
module.fail_json(msg="Spot instance request %s was cancelled before it could be fulfilled." % sir.id)
elif sir.state == 'closed':
# instance is terminating or marked for termination.
# This may be intentional on the part of the operator,
# or it may have been terminated by AWS due to capacity,
# price, or group constraints; in this case, we'll fail
# the module if the reason for the state is anything
# other than termination by user. Codes are documented at
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html
if sir.status.code == 'instance-terminated-by-user':
# do nothing, since the user likely did this on purpose
pass
else:
spot_msg = "Spot instance request %s was closed by AWS with the status %s and fault %s:%s"
module.fail_json(msg=spot_msg % (sir.id, sir.status.code, sir.fault.code, sir.fault.message))
if len(spot_req_inst_ids) < count:
time.sleep(5)
else:
return list(spot_req_inst_ids.values())
module.fail_json(msg="wait for spot requests timeout on %s" % time.asctime())
def enforce_count(module, ec2, vpc):
exact_count = module.params.get('exact_count')
count_tag = module.params.get('count_tag')
zone = module.params.get('zone')
# fail here if the exact count was specified without filtering
# on a tag, as this may lead to an undesired removal of instances
if exact_count and count_tag is None:
module.fail_json(msg="you must use the 'count_tag' option with exact_count")
reservations, instances = find_running_instances_by_count_tag(module, ec2, vpc, count_tag, zone)
changed = None
checkmode = False
instance_dict_array = []
changed_instance_ids = None
if len(instances) == exact_count:
changed = False
elif len(instances) < exact_count:
changed = True
to_create = exact_count - len(instances)
if not checkmode:
(instance_dict_array, changed_instance_ids, changed) \
= create_instances(module, ec2, vpc, override_count=to_create)
for inst in instance_dict_array:
instances.append(inst)
elif len(instances) > exact_count:
changed = True
to_remove = len(instances) - exact_count
if not checkmode:
all_instance_ids = sorted([x.id for x in instances])
remove_ids = all_instance_ids[0:to_remove]
instances = [x for x in instances if x.id not in remove_ids]
(changed, instance_dict_array, changed_instance_ids) \
= terminate_instances(module, ec2, remove_ids)
terminated_list = []
for inst in instance_dict_array:
inst['state'] = "terminated"
terminated_list.append(inst)
instance_dict_array = terminated_list
# ensure all instances are dictionaries
all_instances = []
for inst in instances:
if not isinstance(inst, dict):
warn_if_public_ip_assignment_changed(module, inst)
inst = get_instance_info(inst)
all_instances.append(inst)
return (all_instances, instance_dict_array, changed_instance_ids, changed)
def create_instances(module, ec2, vpc, override_count=None):
"""
Creates new instances
module : AnsibleModule object
ec2: authenticated ec2 connection object
Returns:
A list of dictionaries with instance information
about the instances that were launched
"""
key_name = module.params.get('key_name')
id = module.params.get('id')
group_name = module.params.get('group')
group_id = module.params.get('group_id')
zone = module.params.get('zone')
instance_type = module.params.get('instance_type')
tenancy = module.params.get('tenancy')
spot_price = module.params.get('spot_price')
spot_type = module.params.get('spot_type')
image = module.params.get('image')
if override_count:
count = override_count
else:
count = module.params.get('count')
monitoring = module.params.get('monitoring')
kernel = module.params.get('kernel')
ramdisk = module.params.get('ramdisk')
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
spot_wait_timeout = int(module.params.get('spot_wait_timeout'))
placement_group = module.params.get('placement_group')
user_data = module.params.get('user_data')
instance_tags = module.params.get('instance_tags')
vpc_subnet_id = module.params.get('vpc_subnet_id')
assign_public_ip = module.boolean(module.params.get('assign_public_ip'))
private_ip = module.params.get('private_ip')
instance_profile_name = module.params.get('instance_profile_name')
volumes = module.params.get('volumes')
ebs_optimized = module.params.get('ebs_optimized')
exact_count = module.params.get('exact_count')
count_tag = module.params.get('count_tag')
source_dest_check = module.boolean(module.params.get('source_dest_check'))
termination_protection = module.boolean(module.params.get('termination_protection'))
network_interfaces = module.params.get('network_interfaces')
spot_launch_group = module.params.get('spot_launch_group')
instance_initiated_shutdown_behavior = module.params.get('instance_initiated_shutdown_behavior')
vpc_id = None
if vpc_subnet_id:
if not vpc:
module.fail_json(msg="region must be specified")
else:
vpc_id = vpc.get_all_subnets(subnet_ids=[vpc_subnet_id])[0].vpc_id
else:
vpc_id = None
try:
# Here we try to lookup the group id from the security group name - if group is set.
if group_name:
if vpc_id:
grp_details = ec2.get_all_security_groups(filters={'vpc_id': vpc_id})
else:
grp_details = ec2.get_all_security_groups()
if isinstance(group_name, string_types):
group_name = [group_name]
unmatched = set(group_name).difference(str(grp.name) for grp in grp_details)
if len(unmatched) > 0:
module.fail_json(msg="The following group names are not valid: %s" % ', '.join(unmatched))
group_id = [str(grp.id) for grp in grp_details if str(grp.name) in group_name]
# Now we try to lookup the group id testing if group exists.
elif group_id:
# wrap the group_id in a list if it's not one already
if isinstance(group_id, string_types):
group_id = [group_id]
grp_details = ec2.get_all_security_groups(group_ids=group_id)
group_name = [grp_item.name for grp_item in grp_details]
except boto.exception.NoAuthHandlerFound as e:
module.fail_json(msg=str(e))
# Look up any instances that match our run id.
running_instances = []
count_remaining = int(count)
if id is not None:
filter_dict = {'client-token': id, 'instance-state-name': 'running'}
previous_reservations = ec2.get_all_instances(None, filter_dict)
for res in previous_reservations:
for prev_instance in res.instances:
running_instances.append(prev_instance)
count_remaining = count_remaining - len(running_instances)
# Both min_count and max_count equal count parameter. This means the launch request is explicit (we want count, or fail) in how many instances we want.
if count_remaining == 0:
changed = False
else:
changed = True
try:
params = {'image_id': image,
'key_name': key_name,
'monitoring_enabled': monitoring,
'placement': zone,
'instance_type': instance_type,
'kernel_id': kernel,
'ramdisk_id': ramdisk}
if user_data is not None:
params['user_data'] = to_bytes(user_data, errors='surrogate_or_strict')
if ebs_optimized:
params['ebs_optimized'] = ebs_optimized
# 'tenancy' always has a default value, but it is not a valid parameter for spot instance request
if not spot_price:
params['tenancy'] = tenancy
if boto_supports_profile_name_arg(ec2):
params['instance_profile_name'] = instance_profile_name
else:
if instance_profile_name is not None:
module.fail_json(
msg="instance_profile_name parameter requires Boto version 2.5.0 or higher")
if assign_public_ip is not None:
if not boto_supports_associate_public_ip_address(ec2):
module.fail_json(
msg="assign_public_ip parameter requires Boto version 2.13.0 or higher.")
elif not vpc_subnet_id:
module.fail_json(
msg="assign_public_ip only available with vpc_subnet_id")
else:
if private_ip:
interface = boto.ec2.networkinterface.NetworkInterfaceSpecification(
subnet_id=vpc_subnet_id,
private_ip_address=private_ip,
groups=group_id,
associate_public_ip_address=assign_public_ip)
else:
interface = boto.ec2.networkinterface.NetworkInterfaceSpecification(
subnet_id=vpc_subnet_id,
groups=group_id,
associate_public_ip_address=assign_public_ip)
interfaces = boto.ec2.networkinterface.NetworkInterfaceCollection(interface)
params['network_interfaces'] = interfaces
else:
if network_interfaces:
if isinstance(network_interfaces, string_types):
network_interfaces = [network_interfaces]
interfaces = []
for i, network_interface_id in enumerate(network_interfaces):
interface = boto.ec2.networkinterface.NetworkInterfaceSpecification(
network_interface_id=network_interface_id,
device_index=i)
interfaces.append(interface)
params['network_interfaces'] = \
boto.ec2.networkinterface.NetworkInterfaceCollection(*interfaces)
else:
params['subnet_id'] = vpc_subnet_id
if vpc_subnet_id:
params['security_group_ids'] = group_id
else:
params['security_groups'] = group_name
if volumes:
bdm = BlockDeviceMapping()
for volume in volumes:
if 'device_name' not in volume:
module.fail_json(msg='Device name must be set for volume')
# Minimum volume size is 1GiB. We'll use volume size explicitly set to 0
# to be a signal not to create this volume
if 'volume_size' not in volume or int(volume['volume_size']) > 0:
bdm[volume['device_name']] = create_block_device(module, ec2, volume)
params['block_device_map'] = bdm
# check to see if we're using spot pricing first before starting instances
if not spot_price:
if assign_public_ip is not None and private_ip:
params.update(
dict(
min_count=count_remaining,
max_count=count_remaining,
client_token=id,
placement_group=placement_group,
)
)
else:
params.update(
dict(
min_count=count_remaining,
max_count=count_remaining,
client_token=id,
placement_group=placement_group,
private_ip_address=private_ip,
)
)
# For ordinary (not spot) instances, we can select 'stop'
# (the default) or 'terminate' here.
params['instance_initiated_shutdown_behavior'] = instance_initiated_shutdown_behavior or 'stop'
try:
res = ec2.run_instances(**params)
except boto.exception.EC2ResponseError as e:
if (params['instance_initiated_shutdown_behavior'] != 'terminate' and
"InvalidParameterCombination" == e.error_code):
params['instance_initiated_shutdown_behavior'] = 'terminate'
res = ec2.run_instances(**params)
else:
raise
instids = [i.id for i in res.instances]
while True:
try:
ec2.get_all_instances(instids)
break
except boto.exception.EC2ResponseError as e:
if "<Code>InvalidInstanceID.NotFound</Code>" in str(e):
# there's a race between start and get an instance
continue
else:
module.fail_json(msg=str(e))
# The instances returned through ec2.run_instances above can be in
# terminated state due to idempotency. See commit 7f11c3d for a complete
# explanation.
terminated_instances = [
str(instance.id) for instance in res.instances if instance.state == 'terminated'
]
if terminated_instances:
module.fail_json(msg="Instances with id(s) %s " % terminated_instances +
"were created previously but have since been terminated - " +
"use a (possibly different) 'id' parameter")
else:
if private_ip:
module.fail_json(
msg='private_ip only available with on-demand (non-spot) instances')
if boto_supports_param_in_spot_request(ec2, 'placement_group'):
params['placement_group'] = placement_group
elif placement_group:
module.fail_json(
msg="placement_group parameter requires Boto version 2.3.0 or higher.")
# You can't tell spot instances to 'stop'; they will always be
# 'terminate'd. For convenience, we'll ignore the latter value.
if instance_initiated_shutdown_behavior and instance_initiated_shutdown_behavior != 'terminate':
module.fail_json(
msg="instance_initiated_shutdown_behavior=stop is not supported for spot instances.")
if spot_launch_group and isinstance(spot_launch_group, string_types):
params['launch_group'] = spot_launch_group
params.update(dict(
count=count_remaining,
type=spot_type,
))
# Set spot ValidUntil
# ValidUntil -> (timestamp). The end date of the request, in
# UTC format (for example, YYYY -MM -DD T*HH* :MM :SS Z).
utc_valid_until = (
datetime.datetime.utcnow()
+ datetime.timedelta(seconds=spot_wait_timeout))
params['valid_until'] = utc_valid_until.strftime('%Y-%m-%dT%H:%M:%S.000Z')
res = ec2.request_spot_instances(spot_price, **params)
# Now we have to do the intermediate waiting
if wait:
instids = await_spot_requests(module, ec2, res, count)
else:
instids = []
except boto.exception.BotoServerError as e:
module.fail_json(msg="Instance creation failed => %s: %s" % (e.error_code, e.error_message))
# wait here until the instances are up
num_running = 0
wait_timeout = time.time() + wait_timeout
res_list = ()
while wait_timeout > time.time() and num_running < len(instids):
try:
res_list = ec2.get_all_instances(instids)
except boto.exception.BotoServerError as e:
if e.error_code == 'InvalidInstanceID.NotFound':
time.sleep(1)
continue
else:
raise
num_running = 0
for res in res_list:
num_running += len([i for i in res.instances if i.state == 'running'])
if len(res_list) <= 0:
# got a bad response of some sort, possibly due to
# stale/cached data. Wait a second and then try again
time.sleep(1)
continue
if wait and num_running < len(instids):
time.sleep(5)
else:
break
if wait and wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="wait for instances running timeout on %s" % time.asctime())
# We do this after the loop ends so that we end up with one list
for res in res_list:
running_instances.extend(res.instances)
# Enabled by default by AWS
if source_dest_check is False:
for inst in res.instances:
inst.modify_attribute('sourceDestCheck', False)
# Disabled by default by AWS
if termination_protection is True:
for inst in res.instances:
inst.modify_attribute('disableApiTermination', True)
# Leave this as late as possible to try and avoid InvalidInstanceID.NotFound
if instance_tags and instids:
try:
ec2.create_tags(instids, instance_tags)
except boto.exception.EC2ResponseError as e:
module.fail_json(msg="Instance tagging failed => %s: %s" % (e.error_code, e.error_message))
instance_dict_array = []
created_instance_ids = []
for inst in running_instances:
inst.update()
d = get_instance_info(inst)
created_instance_ids.append(inst.id)
instance_dict_array.append(d)
return (instance_dict_array, created_instance_ids, changed)
def terminate_instances(module, ec2, instance_ids):
"""
Terminates a list of instances
module: Ansible module object
ec2: authenticated ec2 connection object
instance_ids: a list of instance ids to terminate in the form of
[<inst-id>, ...]
Returns a dictionary of instance information
about the instances terminated.
"changed" will be True only if at least one
instance was actually terminated.
"""
# Whether to wait for termination to complete before returning
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
changed = False
instance_dict_array = []
if not isinstance(instance_ids, list) or len(instance_ids) < 1:
module.fail_json(msg='instance_ids should be a list of instances, aborting')
terminated_instance_ids = []
for res in ec2.get_all_instances(instance_ids):
for inst in res.instances:
if inst.state == 'running' or inst.state == 'stopped':
terminated_instance_ids.append(inst.id)
instance_dict_array.append(get_instance_info(inst))
try:
ec2.terminate_instances([inst.id])
except EC2ResponseError as e:
module.fail_json(msg='Unable to terminate instance {0}, error: {1}'.format(inst.id, e))
changed = True
# wait here until the instances are 'terminated'
if wait:
num_terminated = 0
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time() and num_terminated < len(terminated_instance_ids):
response = ec2.get_all_instances(instance_ids=terminated_instance_ids,
filters={'instance-state-name': 'terminated'})
try:
num_terminated = sum([len(res.instances) for res in response])
except Exception as e:
# got a bad response of some sort, possibly due to
# stale/cached data. Wait a second and then try again
time.sleep(1)
continue
if num_terminated < len(terminated_instance_ids):
time.sleep(5)
# waiting took too long
if wait_timeout < time.time() and num_terminated < len(terminated_instance_ids):
module.fail_json(msg="wait for instance termination timeout on %s" % time.asctime())
# Let's get the current state of the instances after terminating - issue600
instance_dict_array = []
for res in ec2.get_all_instances(instance_ids=terminated_instance_ids, filters={'instance-state-name': 'terminated'}):
for inst in res.instances:
instance_dict_array.append(get_instance_info(inst))
return (changed, instance_dict_array, terminated_instance_ids)
def startstop_instances(module, ec2, instance_ids, state, instance_tags):
"""
Starts or stops a list of existing instances
module: Ansible module object
ec2: authenticated ec2 connection object
instance_ids: The list of instances to start or stop in the form of
[ {id: <inst-id>}, ..]
instance_tags: A dict of tag keys and values in the form of
{key: value, ... }
state: Intended state ("running" or "stopped")
Returns a dictionary of instance information
about the instances started/stopped.
If the instance was not able to change state,
"changed" will be set to False.
Note that if instance_ids and instance_tags are both non-empty,
this method will process the intersection of the two
"""
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
group_id = module.params.get('group_id')
group_name = module.params.get('group')
changed = False
instance_dict_array = []
if not isinstance(instance_ids, list) or len(instance_ids) < 1:
# Fail unless the user defined instance tags
if not instance_tags:
module.fail_json(msg='instance_ids should be a list of instances, aborting')
# To make an EC2 tag filter, we need to prepend 'tag:' to each key.
# An empty filter does no filtering, so it's safe to pass it to the
# get_all_instances method even if the user did not specify instance_tags
filters = {}
if instance_tags:
for key, value in instance_tags.items():
filters["tag:" + key] = value
if module.params.get('id'):
filters['client-token'] = module.params['id']
# Check that our instances are not in the state we want to take
# Check (and eventually change) instances attributes and instances state
existing_instances_array = []
for res in ec2.get_all_instances(instance_ids, filters=filters):
for inst in res.instances:
warn_if_public_ip_assignment_changed(module, inst)
changed = (check_source_dest_attr(module, inst, ec2) or
check_termination_protection(module, inst) or changed)
# Check security groups and if we're using ec2-vpc; ec2-classic security groups may not be modified
if inst.vpc_id and group_name:
grp_details = ec2.get_all_security_groups(filters={'vpc_id': inst.vpc_id})
if isinstance(group_name, string_types):
group_name = [group_name]
unmatched = set(group_name) - set(to_text(grp.name) for grp in grp_details)
if unmatched:
module.fail_json(msg="The following group names are not valid: %s" % ', '.join(unmatched))
group_ids = [to_text(grp.id) for grp in grp_details if to_text(grp.name) in group_name]
elif inst.vpc_id and group_id:
if isinstance(group_id, string_types):
group_id = [group_id]
grp_details = ec2.get_all_security_groups(group_ids=group_id)
group_ids = [grp_item.id for grp_item in grp_details]
if inst.vpc_id and (group_name or group_id):
if set(sg.id for sg in inst.groups) != set(group_ids):
changed = inst.modify_attribute('groupSet', group_ids)
# Check instance state
if inst.state != state:
instance_dict_array.append(get_instance_info(inst))
try:
if state == 'running':
inst.start()
else:
inst.stop()
except EC2ResponseError as e:
module.fail_json(msg='Unable to change state for instance {0}, error: {1}'.format(inst.id, e))
changed = True
existing_instances_array.append(inst.id)
instance_ids = list(set(existing_instances_array + (instance_ids or [])))
# Wait for all the instances to finish starting or stopping
wait_timeout = time.time() + wait_timeout
while wait and wait_timeout > time.time():
instance_dict_array = []
matched_instances = []
for res in ec2.get_all_instances(instance_ids):
for i in res.instances:
if i.state == state:
instance_dict_array.append(get_instance_info(i))
matched_instances.append(i)
if len(matched_instances) < len(instance_ids):
time.sleep(5)
else:
break
if wait and wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="wait for instances running timeout on %s" % time.asctime())
return (changed, instance_dict_array, instance_ids)
def restart_instances(module, ec2, instance_ids, state, instance_tags):
"""
Restarts a list of existing instances
module: Ansible module object
ec2: authenticated ec2 connection object
instance_ids: The list of instances to restart in the form of
[ {id: <inst-id>}, ..]
instance_tags: A dict of tag keys and values in the form of
{key: value, ... }
state: Intended state ("restarted")
Returns a dictionary of instance information
about the instances.
If the instance was not able to change state,
"changed" will be set to False.
Wait will not apply here as this is an OS-level operation.
Note that if instance_ids and instance_tags are both non-empty,
this method will process the intersection of the two.
"""
changed = False
instance_dict_array = []
if not isinstance(instance_ids, list) or len(instance_ids) < 1:
# Fail unless the user defined instance tags
if not instance_tags:
module.fail_json(msg='instance_ids should be a list of instances, aborting')
# To make an EC2 tag filter, we need to prepend 'tag:' to each key.
# An empty filter does no filtering, so it's safe to pass it to the
# get_all_instances method even if the user did not specify instance_tags
filters = {}
if instance_tags:
for key, value in instance_tags.items():
filters["tag:" + key] = value
if module.params.get('id'):
filters['client-token'] = module.params['id']
# Check that our instances are not in the state we want to take
# Check (and eventually change) instances attributes and instances state
for res in ec2.get_all_instances(instance_ids, filters=filters):
for inst in res.instances:
warn_if_public_ip_assignment_changed(module, inst)
changed = (check_source_dest_attr(module, inst, ec2) or
check_termination_protection(module, inst) or changed)
# Check instance state
if inst.state != state:
instance_dict_array.append(get_instance_info(inst))
try:
inst.reboot()
except EC2ResponseError as e:
module.fail_json(msg='Unable to change state for instance {0}, error: {1}'.format(inst.id, e))
changed = True
return (changed, instance_dict_array, instance_ids)
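The tag-to-filter translation done inside the functions above (prepending `tag:` to each key, plus the optional `client-token` filter) can be factored out and sketched in isolation (hypothetical helper name):

```python
def build_tag_filters(instance_tags, client_token=None):
    """Translate a dict of tag key/values into EC2 DescribeInstances filters.

    An empty dict does no filtering, so the result is always safe to pass
    to get_all_instances even when no tags were specified.
    """
    filters = {}
    for key, value in (instance_tags or {}).items():
        # EC2 tag filters are keyed as 'tag:<key>'
        filters['tag:' + key] = value
    if client_token:
        filters['client-token'] = client_token
    return filters
```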
def check_termination_protection(module, inst):
"""
Check the instance disableApiTermination attribute.
module: Ansible module object
inst: EC2 instance object
returns: True if state changed, None otherwise
"""
termination_protection = module.params.get('termination_protection')
if (inst.get_attribute('disableApiTermination')['disableApiTermination'] != termination_protection and termination_protection is not None):
inst.modify_attribute('disableApiTermination', termination_protection)
return True
def check_source_dest_attr(module, inst, ec2):
"""
Check the instance sourceDestCheck attribute.
module: Ansible module object
inst: EC2 instance object
returns: True if state changed, None otherwise
"""
source_dest_check = module.params.get('source_dest_check')
if source_dest_check is not None:
try:
if inst.vpc_id is not None and inst.get_attribute('sourceDestCheck')['sourceDestCheck'] != source_dest_check:
inst.modify_attribute('sourceDestCheck', source_dest_check)
return True
except boto.exception.EC2ResponseError as exc:
# instances with more than one Elastic Network Interface will
# fail, because they have the sourceDestCheck attribute defined
# per-interface
if exc.code == 'InvalidInstanceID':
for interface in inst.interfaces:
if interface.source_dest_check != source_dest_check:
ec2.modify_network_interface_attribute(interface.id, "sourceDestCheck", source_dest_check)
return True
else:
module.fail_json(msg='Failed to handle source_dest_check state for instance {0}, error: {1}'.format(inst.id, exc),
exception=traceback.format_exc())
def warn_if_public_ip_assignment_changed(module, instance):
# This is a non-modifiable attribute.
assign_public_ip = module.params.get('assign_public_ip')
# Check that public ip assignment is the same and warn if not
public_dns_name = getattr(instance, 'public_dns_name', None)
if (assign_public_ip or public_dns_name) and (not public_dns_name or assign_public_ip is False):
module.warn("Unable to modify public ip assignment to {0} for instance {1}. "
"Whether or not to assign a public IP is determined during instance creation.".format(assign_public_ip, instance.id))
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
key_name=dict(aliases=['keypair']),
id=dict(),
group=dict(type='list', aliases=['groups']),
group_id=dict(type='list'),
zone=dict(aliases=['aws_zone', 'ec2_zone']),
instance_type=dict(aliases=['type']),
spot_price=dict(),
spot_type=dict(default='one-time', choices=["one-time", "persistent"]),
spot_launch_group=dict(),
image=dict(),
kernel=dict(),
count=dict(type='int', default=1),
monitoring=dict(type='bool', default=False),
ramdisk=dict(),
wait=dict(type='bool', default=False),
wait_timeout=dict(type='int', default=300),
spot_wait_timeout=dict(type='int', default=600),
placement_group=dict(),
user_data=dict(),
instance_tags=dict(type='dict'),
vpc_subnet_id=dict(),
assign_public_ip=dict(type='bool'),
private_ip=dict(),
instance_profile_name=dict(),
instance_ids=dict(type='list', aliases=['instance_id']),
source_dest_check=dict(type='bool', default=None),
termination_protection=dict(type='bool', default=None),
state=dict(default='present', choices=['present', 'absent', 'running', 'restarted', 'stopped']),
instance_initiated_shutdown_behavior=dict(default='stop', choices=['stop', 'terminate']),
exact_count=dict(type='int', default=None),
count_tag=dict(type='raw'),
volumes=dict(type='list'),
ebs_optimized=dict(type='bool', default=False),
tenancy=dict(default='default', choices=['default', 'dedicated']),
network_interfaces=dict(type='list', aliases=['network_interface'])
)
)
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
# Can be uncommented when we finish the deprecation cycle.
# ['group', 'group_id'],
['exact_count', 'count'],
['exact_count', 'state'],
['exact_count', 'instance_ids'],
['network_interfaces', 'assign_public_ip'],
['network_interfaces', 'group'],
['network_interfaces', 'group_id'],
['network_interfaces', 'private_ip'],
['network_interfaces', 'vpc_subnet_id'],
],
)
if module.params.get('group') and module.params.get('group_id'):
module.deprecate(
msg='Support for passing both group and group_id has been deprecated. '
'Currently group_id is ignored, in future passing both will result in an error',
version='2.14')
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
try:
region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module)
if module.params.get('region') or not module.params.get('ec2_url'):
ec2 = ec2_connect(module)
elif module.params.get('ec2_url'):
ec2 = connect_ec2_endpoint(ec2_url, **aws_connect_kwargs)
if 'region' not in aws_connect_kwargs:
aws_connect_kwargs['region'] = ec2.region
vpc = connect_vpc(**aws_connect_kwargs)
except boto.exception.NoAuthHandlerFound as e:
module.fail_json(msg="Failed to get connection: %s" % e.message, exception=traceback.format_exc())
tagged_instances = []
state = module.params['state']
if state == 'absent':
instance_ids = module.params['instance_ids']
if not instance_ids:
module.fail_json(msg='instance_ids list is required for absent state')
(changed, instance_dict_array, new_instance_ids) = terminate_instances(module, ec2, instance_ids)
elif state in ('running', 'stopped'):
instance_ids = module.params.get('instance_ids')
instance_tags = module.params.get('instance_tags')
if not (isinstance(instance_ids, list) or isinstance(instance_tags, dict)):
module.fail_json(msg='running/stopped state requires a list of instance_ids or a dict of instance_tags: %s' % instance_ids)
(changed, instance_dict_array, new_instance_ids) = startstop_instances(module, ec2, instance_ids, state, instance_tags)
elif state == 'restarted':
instance_ids = module.params.get('instance_ids')
instance_tags = module.params.get('instance_tags')
if not (isinstance(instance_ids, list) or isinstance(instance_tags, dict)):
module.fail_json(msg='restarted state requires a list of instance_ids or a dict of instance_tags: %s' % instance_ids)
(changed, instance_dict_array, new_instance_ids) = restart_instances(module, ec2, instance_ids, state, instance_tags)
elif state == 'present':
# Changed is always set to true when provisioning new instances
if not module.params.get('image'):
module.fail_json(msg='image parameter is required for new instance')
if module.params.get('exact_count') is None:
(instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2, vpc)
else:
(tagged_instances, instance_dict_array, new_instance_ids, changed) = enforce_count(module, ec2, vpc)
# Always return instances in the same order
if new_instance_ids:
new_instance_ids.sort()
if instance_dict_array:
instance_dict_array.sort(key=lambda x: x['id'])
if tagged_instances:
tagged_instances.sort(key=lambda x: x['id'])
module.exit_json(changed=changed, instance_ids=new_instance_ids, instances=instance_dict_array, tagged_instances=tagged_instances)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,213 |
docker_container does not override image defined healthcheck when test: ["NONE"]
|
As per the docs, the following should override the image-defined health check, but it fails to do so. This is important where the image-defined health check is not applicable. One such example is creating a MySQL cluster, where the default health check is against mysqld, which is not appropriate because the check should target either ndbd or ndb_mgmd instead.
This leaves the container marked as unhealthy.
Using;
healthcheck:
test: ["NONE"]
If you use the above to launch MySQL, the default health check is still used; the inspect output follows.
"Healthcheck": {
"Test": [
"CMD-SHELL",
"/healthcheck.sh"
]
},
Resulting in;
21b30279e77d mysql/mysql-cluster:8.0.18 "/entrypoint.sh mysq…" 3 minutes ago Up 2 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql
0fb50981bba5 mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndbd" 3 minutes ago Up 3 minutes (unhealthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-ndbd-2
8746135e3f8e mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndbd" 3 minutes ago Up 3 minutes (unhealthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-ndbd-1
420fb8249df8 mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndb_…" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-mgmd
The workaround is to call a stub shell script which simply calls exit 0.
healthcheck:
test: ["CMD-SHELL", "{{ mysql_config_directory }}/healthcheck.sh"]
Resulting in;
"Healthcheck": {
"Test": [
"CMD-SHELL",
"/etc/cell/dev/mysql/healthcheck.sh"
]
},
And;
aaf28f87abf0 mysql/mysql-cluster:8.0.18 "/entrypoint.sh mysq…" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql
2df7948c37fc mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndbd" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-ndbd-2
1a3cd97cfc80 mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndbd" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-ndbd-1
5fea532ef20f mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndb_…" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-mgmd
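For context, disabling an inherited check is exactly what Docker documents for `HEALTHCHECK NONE`. Below is a minimal sketch (a hypothetical helper, not the module's actual code) of how a module could normalize the `test` option so that `["NONE"]` explicitly disables the image-defined check instead of being dropped:

```python
def normalize_healthcheck(healthcheck):
    """Translate a user-supplied healthcheck dict into Docker API form.

    A test of ["NONE"] (or the bare string "NONE") is an explicit request to
    disable any health check inherited from the image, so it must be passed
    through rather than treated as "no healthcheck configured".
    """
    if not healthcheck:
        return None
    result = dict(healthcheck)
    test = result.get('test')
    if test in ('NONE', ['NONE']):
        # Docker's API treats ["NONE"] as "no health check at all",
        # overriding whatever HEALTHCHECK the image defines.
        result = {'test': ['NONE']}
    elif isinstance(test, str):
        # A plain string is run via the shell, matching CMD-SHELL semantics.
        result['test'] = ['CMD-SHELL', test]
    return result
```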
|
https://github.com/ansible/ansible/issues/66213
|
https://github.com/ansible/ansible/pull/66599
|
d6f2b4e788ed13756ba4e4a05b8b7a879900dbc3
|
5c1a3a3ac2086119bd16316dde379047d90cd86c
| 2020-01-06T15:46:24Z |
python
| 2020-02-03T18:13:17Z |
changelogs/fragments/66599-docker-healthcheck.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,213 |
docker_container does not override image defined healthcheck when test: ["NONE"]
|
As per the docs, the following should override the image-defined health check, but it fails to do so. This is important where the image-defined health check is not applicable. One such example is creating a MySQL cluster, where the default health check is against mysqld, which is not appropriate because the check should target either ndbd or ndb_mgmd instead.
This leaves the container marked as unhealthy.
Using;
healthcheck:
test: ["NONE"]
If you use the above to launch MySQL, the default health check is still used; the inspect output follows.
"Healthcheck": {
"Test": [
"CMD-SHELL",
"/healthcheck.sh"
]
},
Resulting in;
21b30279e77d mysql/mysql-cluster:8.0.18 "/entrypoint.sh mysq…" 3 minutes ago Up 2 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql
0fb50981bba5 mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndbd" 3 minutes ago Up 3 minutes (unhealthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-ndbd-2
8746135e3f8e mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndbd" 3 minutes ago Up 3 minutes (unhealthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-ndbd-1
420fb8249df8 mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndb_…" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-mgmd
The workaround is to call a stub shell script which simply calls exit 0.
healthcheck:
test: ["CMD-SHELL", "{{ mysql_config_directory }}/healthcheck.sh"]
Resulting in;
"Healthcheck": {
"Test": [
"CMD-SHELL",
"/etc/cell/dev/mysql/healthcheck.sh"
]
},
And;
aaf28f87abf0 mysql/mysql-cluster:8.0.18 "/entrypoint.sh mysq…" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql
2df7948c37fc mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndbd" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-ndbd-2
1a3cd97cfc80 mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndbd" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-ndbd-1
5fea532ef20f mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndb_…" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-mgmd
|
https://github.com/ansible/ansible/issues/66213
|
https://github.com/ansible/ansible/pull/66599
|
d6f2b4e788ed13756ba4e4a05b8b7a879900dbc3
|
5c1a3a3ac2086119bd16316dde379047d90cd86c
| 2020-01-06T15:46:24Z |
python
| 2020-02-03T18:13:17Z |
lib/ansible/modules/cloud/docker/docker_container.py
|
#!/usr/bin/python
#
# Copyright 2016 Red Hat | Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_container
short_description: manage docker containers
description:
- Manage the life cycle of docker containers.
- Supports check mode. Run with C(--check) and C(--diff) to view config difference and list of actions to be taken.
version_added: "2.1"
notes:
- For most config changes, the container needs to be recreated, i.e. the existing container has to be destroyed and
a new one created. This can cause unexpected data loss and downtime. You can use the I(comparisons) option to
prevent this.
- If the module needs to recreate the container, it will only use the options provided to the module to create the
new container (except I(image)). Therefore, always specify *all* options relevant to the container.
- When I(restart) is set to C(true), the module will only restart the container if no config changes are detected.
Please note that several options have default values; if the container to be restarted uses different values for
these options, it will be recreated instead. The options with default values which can cause this are I(auto_remove),
I(detach), I(init), I(interactive), I(memory), I(paused), I(privileged), I(read_only) and I(tty). This behavior
can be changed by setting I(container_default_behavior) to C(no_defaults), which will be the default value from
Ansible 2.14 on.
options:
auto_remove:
description:
- Enable auto-removal of the container on daemon side when the container's process exits.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
version_added: "2.4"
blkio_weight:
description:
- Block IO (relative weight), between 10 and 1000.
type: int
capabilities:
description:
- List of capabilities to add to the container.
type: list
elements: str
cap_drop:
description:
- List of capabilities to drop from the container.
type: list
elements: str
version_added: "2.7"
cleanup:
description:
- Use with I(detach=false) to remove the container after successful execution.
type: bool
default: no
version_added: "2.2"
command:
description:
- Command to execute when the container starts. A command may be either a string or a list.
- Prior to version 2.4, strings were split on commas.
type: raw
comparisons:
description:
- Allows you to specify how properties of existing containers are compared with
module options to decide whether the container should be recreated / updated
or not.
- Only options which correspond to the state of a container as handled by the
Docker daemon can be specified, as well as C(networks).
- Must be a dictionary specifying for an option one of the keys C(strict), C(ignore)
and C(allow_more_present).
- If C(strict) is specified, values are tested for equality, and changes always
result in updating or restarting. If C(ignore) is specified, changes are ignored.
- C(allow_more_present) is allowed only for lists, sets and dicts. If it is
specified for lists or sets, the container will only be updated or restarted if
the module option contains a value which is not present in the container's
options. If the option is specified for a dict, the container will only be updated
or restarted if the module option contains a key which isn't present in the
container's option, or if the value of a key present differs.
- The wildcard option C(*) can be used to set one of the default values C(strict)
or C(ignore) to *all* comparisons which are not explicitly set to other values.
- See the examples for details.
type: dict
version_added: "2.8"
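The three comparison modes described above can be illustrated with a small sketch (illustrative only, not the module's actual implementation):

```python
def option_differs(desired, actual, mode='strict'):
    """Return True if an option difference should trigger update/recreate.

    Mirrors the three comparison modes documented for I(comparisons):
    'strict', 'ignore' and 'allow_more_present'.
    """
    if mode == 'ignore':
        return False
    if mode == 'allow_more_present':
        if isinstance(desired, dict):
            # Differ only if a desired key is missing or has another value;
            # extra keys on the container are tolerated.
            return any(actual.get(k) != v for k, v in desired.items())
        if isinstance(desired, (list, set)):
            # Differ only if the module option has an item the container lacks.
            return any(item not in actual for item in desired)
    # 'strict': plain equality
    return desired != actual
```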
container_default_behavior:
description:
- Various module options used to have default values. This causes problems with
containers which use different values for these options.
- The default value is C(compatibility), which will ensure that the default values
are used when the values are not explicitly specified by the user.
- From Ansible 2.14 on, the default value will switch to C(no_defaults). To avoid
deprecation warnings, please set I(container_default_behavior) to an explicit
value.
- This affects the I(auto_remove), I(detach), I(init), I(interactive), I(memory),
I(paused), I(privileged), I(read_only) and I(tty) options.
type: str
choices:
- compatibility
- no_defaults
version_added: "2.10"
cpu_period:
description:
- Limit CPU CFS (Completely Fair Scheduler) period.
- See I(cpus) for an easier to use alternative.
type: int
cpu_quota:
description:
- Limit CPU CFS (Completely Fair Scheduler) quota.
- See I(cpus) for an easier to use alternative.
type: int
cpus:
description:
- Specify how much of the available CPU resources a container can use.
- A value of C(1.5) means that at most one and a half CPU (core) will be used.
type: float
version_added: '2.10'
cpuset_cpus:
description:
- CPUs in which to allow execution C(1,3) or C(1-3).
type: str
cpuset_mems:
description:
- Memory nodes (MEMs) in which to allow execution C(0-3) or C(0,1).
type: str
cpu_shares:
description:
- CPU shares (relative weight).
type: int
detach:
description:
- Enable detached mode to leave the container running in background.
- If disabled, the task will reflect the status of the container run (failed if the command failed).
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(yes).
type: bool
devices:
description:
- List of host device bindings to add to the container.
- "Each binding is a mapping expressed in the format C(<path_on_host>:<path_in_container>:<cgroup_permissions>)."
type: list
elements: str
device_read_bps:
description:
- "List of device path and read rate (bytes per second) from device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit in format C(<number>[<unit>])."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
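The C(<number>[<unit>]) rate format described above (also used by the memory options further down) is straightforward to parse; a hedged sketch with a hypothetical helper name:

```python
# Binary byte units as documented: B, K (1024B), M, G, T, P.
_BYTE_UNITS = {'B': 1, 'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3,
               'T': 1024 ** 4, 'P': 1024 ** 5}


def parse_bytes(value):
    """Convert '<number>[<unit>]' (e.g. '300M') to a byte count.

    Omitting the unit defaults to bytes, matching the documented behavior.
    """
    value = str(value)
    if value and value[-1] in _BYTE_UNITS:
        return int(value[:-1]) * _BYTE_UNITS[value[-1]]
    return int(value)
```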
device_write_bps:
description:
- "List of device and write rate (bytes per second) to device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device write limit in format C(<number>[<unit>])."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_read_iops:
description:
- "List of device and read rate (IO per second) from device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
device_write_iops:
description:
- "List of device and write rate (IO per second) to device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device write limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
dns_opts:
description:
- List of DNS options.
type: list
elements: str
dns_servers:
description:
- List of custom DNS servers.
type: list
elements: str
dns_search_domains:
description:
- List of custom DNS search domains.
type: list
elements: str
domainname:
description:
- Container domainname.
type: str
version_added: "2.5"
env:
description:
- Dictionary of key,value pairs.
- Values which might be parsed as numbers, booleans or other types by the YAML parser must be quoted (e.g. C("true")) in order to avoid data loss.
type: dict
env_file:
description:
- Path to a file, present on the target, containing environment variables I(FOO=BAR).
- If variable also present in I(env), then the I(env) value will override.
type: path
version_added: "2.2"
entrypoint:
description:
- Command that overwrites the default C(ENTRYPOINT) of the image.
type: list
elements: str
etc_hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary.
Each host name will be added to the container's C(/etc/hosts) file.
type: dict
exposed_ports:
description:
- List of additional container ports which informs Docker that the container
listens on the specified network ports at runtime.
- If the port is already exposed using C(EXPOSE) in a Dockerfile, it does not
need to be exposed again.
type: list
elements: str
aliases:
- exposed
- expose
force_kill:
description:
- Use the kill command when stopping a running container.
type: bool
default: no
aliases:
- forcekill
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
type: list
elements: str
healthcheck:
description:
- Configure a check that is run to determine whether or not containers for this service are "healthy".
- "See the docs for the L(HEALTHCHECK Dockerfile instruction,https://docs.docker.com/engine/reference/builder/#healthcheck)
for details on how healthchecks work."
- "I(interval), I(timeout) and I(start_period) are specified as durations. They accept a duration string in a format
that looks like: C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
type: dict
suboptions:
test:
description:
- Command to run to check health.
- Must be either a string or a list. If it is a list, the first item must be one of C(NONE), C(CMD) or C(CMD-SHELL).
type: raw
interval:
description:
- Time between running the check.
- The default used by the Docker daemon is C(30s).
type: str
timeout:
description:
- Maximum time to allow one check to run.
- The default used by the Docker daemon is C(30s).
type: str
retries:
description:
- Consecutive number of failures needed to report unhealthy.
- The default used by the Docker daemon is C(3).
type: int
start_period:
description:
- Start period for the container to initialize before starting health-retries countdown.
- The default used by the Docker daemon is C(0s).
type: str
version_added: "2.8"
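The duration strings described above (C(5h34m56s), C(1m30s), ...) can be converted to seconds with a small parser; this is a sketch under the stated unit set, not code from the module:

```python
import re

# Supported units per the documentation: us, ms, s, m, h.
_UNITS = {'us': 1e-6, 'ms': 1e-3, 's': 1, 'm': 60, 'h': 3600}


def parse_duration(value):
    """Convert a Docker-style duration string like '1m30s' to seconds."""
    matches = re.findall(r'(\d+)(us|ms|s|m|h)', value)
    # Reject inputs with leftover characters (e.g. '90' or '1h2x').
    if not matches or ''.join(n + u for n, u in matches) != value:
        raise ValueError('invalid duration: %r' % value)
    return sum(int(n) * _UNITS[u] for n, u in matches)
```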
hostname:
description:
- The container's hostname.
type: str
ignore_image:
description:
- When I(state) is C(present) or C(started), the module compares the configuration of an existing
container to requested configuration. The evaluation includes the image version. If the image
version in the registry does not match the container, the container will be recreated. You can
stop this behavior by setting I(ignore_image) to C(True).
- "*Warning:* This option is ignored if C(image: ignore) or C(*: ignore) is specified in the
I(comparisons) option."
type: bool
default: no
version_added: "2.2"
image:
description:
- Repository path and tag used to create the container. If an image is not found or pull is true, the image
will be pulled from the registry. If no tag is included, C(latest) will be used.
- Can also be an image ID. If this is the case, the image is assumed to be available locally.
The I(pull) option is ignored for this case.
type: str
init:
description:
- Run an init inside the container that forwards signals and reaps processes.
- This option requires Docker API >= 1.25.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
version_added: "2.6"
interactive:
description:
- Keep stdin open after a container is launched, even if not attached.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
ipc_mode:
description:
- Set the IPC mode for the container.
- Can be one of C(container:<name|id>) to reuse another container's IPC namespace or C(host) to use
the host's IPC namespace within the container.
type: str
keep_volumes:
description:
- Retain volumes associated with a removed container.
type: bool
default: yes
kill_signal:
description:
- Override default signal used to kill a running container.
type: str
kernel_memory:
description:
- "Kernel memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte). Minimum is C(4M)."
- Omitting the unit defaults to bytes.
type: str
labels:
description:
- Dictionary of key value pairs.
type: dict
links:
description:
- List of name aliases for linked containers in the format C(container_name:alias).
- Setting this will force container to be restarted.
type: list
elements: str
log_driver:
description:
- Specify the logging driver. Docker uses C(json-file) by default.
- See L(here,https://docs.docker.com/config/containers/logging/configure/) for possible choices.
type: str
log_options:
description:
- Dictionary of options specific to the chosen I(log_driver).
- See U(https://docs.docker.com/engine/admin/logging/overview/) for details.
type: dict
aliases:
- log_opt
mac_address:
description:
- Container MAC address (e.g. 92:d0:c6:0a:29:33).
type: str
memory:
description:
- "Memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C("0").
type: str
memory_reservation:
description:
- "Memory soft limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swap:
description:
- "Total memory limit (memory + swap) in format C(<number>[<unit>]).
Number is a positive integer. Unit can be C(B) (byte), C(K) (kibibyte, 1024B),
C(M) (mebibyte), C(G) (gibibyte), C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swappiness:
description:
- Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- If not set, the value will remain the same if the container exists and will be inherited
from the host machine if it is (re-)created.
type: int
mounts:
version_added: "2.9"
type: list
elements: dict
description:
- Specification for mounts to be added to the container. More powerful alternative to I(volumes).
suboptions:
target:
description:
- Path inside the container.
type: str
required: true
source:
description:
- Mount source (e.g. a volume name or a host path).
type: str
type:
description:
- The mount type.
- Note that C(npipe) is only supported by Docker for Windows.
type: str
choices:
- bind
- npipe
- tmpfs
- volume
default: volume
read_only:
description:
- Whether the mount should be read-only.
type: bool
consistency:
description:
- The consistency requirement for the mount.
type: str
choices:
- cached
- consistent
- default
- delegated
propagation:
description:
- Propagation mode. Only valid for the C(bind) type.
type: str
choices:
- private
- rprivate
- shared
- rshared
- slave
- rslave
no_copy:
description:
- False if the volume should be populated with the data from the target. Only valid for the C(volume) type.
- The default value is C(false).
type: bool
labels:
description:
- User-defined name and labels for the volume. Only valid for the C(volume) type.
type: dict
volume_driver:
description:
- Specify the volume driver. Only valid for the C(volume) type.
- See L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver) for details.
type: str
volume_options:
description:
- Dictionary of options specific to the chosen volume_driver. See
L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver) for details.
type: dict
tmpfs_size:
description:
- "The size for the tmpfs mount in bytes in format <number>[<unit>]."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- "Omitting the unit defaults to bytes."
type: str
tmpfs_mode:
description:
- The permission mode for the tmpfs mount.
type: str
name:
description:
- Assign a name to a new container or match an existing container.
- When identifying an existing container name may be a name or a long or short container ID.
type: str
required: yes
network_mode:
description:
- Connect the container to a network. Choices are C(bridge), C(host), C(none), C(container:<name|id>), C(<network_name>) or C(default).
- "*Note* that from Ansible 2.14 on, if I(networks_cli_compatible) is C(true) and I(networks) contains at least one network,
the default value for I(network_mode) will be the name of the first network in the I(networks) list. You can prevent this
by explicitly specifying a value for I(network_mode), like the default value C(default) which will be used by Docker if
I(network_mode) is not specified."
type: str
userns_mode:
description:
- Set the user namespace mode for the container. Currently, the only valid values are C(host) and the empty string.
type: str
version_added: "2.5"
networks:
description:
- List of networks the container belongs to.
- For examples of the data structure and usage see EXAMPLES below.
- To remove a container from one or more networks, use the I(purge_networks) option.
- Note that as opposed to C(docker run ...), M(docker_container) does not remove the default
network if I(networks) is specified. You need to explicitly use I(purge_networks) to enforce
the removal of the default network (and all other networks not explicitly mentioned in I(networks)).
Alternatively, use the I(networks_cli_compatible) option, which will be enabled by default from Ansible 2.12 on.
type: list
elements: dict
suboptions:
name:
description:
- The network's name.
type: str
required: yes
ipv4_address:
description:
- The container's IPv4 address in this network.
type: str
ipv6_address:
description:
- The container's IPv6 address in this network.
type: str
links:
description:
- A list of containers to link to.
type: list
elements: str
aliases:
description:
- List of aliases for this container in this network. These names
can be used in the network to reach this container.
type: list
elements: str
version_added: "2.2"
networks_cli_compatible:
description:
- "When networks are provided to the module via the I(networks) option, the module
behaves differently than C(docker run --network): C(docker run --network other)
will create a container with network C(other) attached, but the default network
not attached. This module with I(networks: {name: other}) will create a container
with both C(default) and C(other) attached. If I(purge_networks) is set to C(yes),
the C(default) network will be removed afterwards."
- "If I(networks_cli_compatible) is set to C(yes), this module will behave as
C(docker run --network) and will *not* add the default network if I(networks) is
specified. If I(networks) is not specified, the default network will be attached."
- "*Note* that docker CLI also sets I(network_mode) to the name of the first network
added if C(--network) is specified. For more compatibility with docker CLI, you
explicitly have to set I(network_mode) to the name of the first network you're
adding. This behavior will change for Ansible 2.14: then I(network_mode) will
automatically be set to the first network name in I(networks) if I(network_mode)
is not specified, I(networks) has at least one entry and I(networks_cli_compatible)
is C(true)."
- Current value is C(no). A new default of C(yes) will be set in Ansible 2.12.
type: bool
version_added: "2.8"
oom_killer:
description:
- Whether or not to disable OOM Killer for the container.
type: bool
oom_score_adj:
description:
- An integer value containing the score given to the container in order to tune
OOM killer preferences.
type: int
version_added: "2.2"
output_logs:
description:
- If set to true, output of the container command will be printed.
- Only effective when I(log_driver) is set to C(json-file) or C(journald).
type: bool
default: no
version_added: "2.7"
paused:
description:
- Use with the started state to pause running processes inside the container.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
pid_mode:
description:
- Set the PID namespace mode for the container.
- Note that Docker SDK for Python < 2.0 only supports C(host). Newer versions of the
Docker SDK for Python (docker) allow all values supported by the Docker daemon.
type: str
pids_limit:
description:
- Set PIDs limit for the container. It accepts an integer value.
- Set C(-1) for unlimited PIDs.
type: int
version_added: "2.8"
privileged:
description:
- Give extended privileges to the container.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
published_ports:
description:
- List of ports to publish from the container to the host.
- "Use docker CLI syntax: C(8000), C(9000:8000), or C(0.0.0.0:9000:8000), where 8000 is a
container port, 9000 is a host port, and 0.0.0.0 is a host interface."
- Port ranges can be used for source and destination ports. If two ranges with
different lengths are specified, the shorter range will be used.
- "Bind addresses must be either IPv4 or IPv6 addresses. Hostnames are *not* allowed. This
is different from the C(docker) command line utility. Use the L(dig lookup,../lookup/dig.html)
to resolve hostnames."
- A value of C(all) will publish all exposed container ports to random host ports, ignoring
any other mappings.
- If I(networks) parameter is provided, will inspect each network to see if there exists
a bridge network with optional parameter C(com.docker.network.bridge.host_binding_ipv4).
If such a network is found, then published ports where no host IP address is specified
will be bound to the host IP pointed to by C(com.docker.network.bridge.host_binding_ipv4).
Note that the first bridge network with a C(com.docker.network.bridge.host_binding_ipv4)
value encountered in the list of I(networks) is the one that will be used.
type: list
elements: str
aliases:
- ports
pull:
description:
- If true, always pull the latest version of an image. Otherwise, will only pull an image
when missing.
- "*Note:* images are only pulled when specified by name. If the image is specified
as an image ID (hash), it cannot be pulled."
type: bool
default: no
purge_networks:
description:
- Remove the container from ALL networks not included in I(networks) parameter.
- Any default networks such as C(bridge), if not found in I(networks), will be removed as well.
type: bool
default: no
version_added: "2.2"
read_only:
description:
- Mount the container's root file system as read-only.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
recreate:
description:
- Use with present and started states to force the re-creation of an existing container.
type: bool
default: no
removal_wait_timeout:
description:
- When removing an existing container, the docker daemon API call returns after the container
is scheduled for removal. Removal usually is very fast, but it can happen that during high I/O
load, removal can take longer. By default, the module will wait until the container has been
removed before trying to (re-)create it, however long this takes.
- By setting this option, the module will wait at most this many seconds for the container to be
removed. If the container is still in the removal phase after this many seconds, the module will
fail.
type: float
version_added: "2.10"
restart:
description:
- Use with started state to force a matching container to be stopped and restarted.
type: bool
default: no
restart_policy:
description:
- Container restart policy.
- Place quotes around the C(no) option.
type: str
choices:
- 'no'
- 'on-failure'
- 'always'
- 'unless-stopped'
restart_retries:
description:
- Use with restart policy to control maximum number of restart attempts.
type: int
runtime:
description:
- Runtime to use for the container.
type: str
version_added: "2.8"
shm_size:
description:
- "Size of C(/dev/shm) in format C(<number>[<unit>]). Number is positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes. If you omit the size entirely, Docker daemon uses C(64M).
type: str
security_opts:
description:
- List of security options in the form of C("label:user:User").
type: list
elements: str
state:
description:
- 'C(absent) - A container matching the specified name will be stopped and removed. Use I(force_kill) to kill the container
rather than stopping it. Use I(keep_volumes) to retain volumes associated with the removed container.'
- 'C(present) - Asserts the existence of a container matching the name and any provided configuration parameters. If no
container matches the name, a container will be created. If a container matches the name but the provided configuration
does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed and re-created
with the requested config.'
- 'C(started) - Asserts that the container is first C(present), and then if the container is not running moves it to a running
state. Use I(restart) to force a matching container to be stopped and restarted.'
- 'C(stopped) - Asserts that the container is first C(present), and then if the container is running moves it to a stopped
state.'
- To control what will be taken into account when comparing configuration, see the I(comparisons) option. To avoid that the
image version will be taken into account, you can also use the I(ignore_image) option.
- Use the I(recreate) option to always force re-creation of a matching container, even if it is running.
- If the container should be killed instead of stopped in case it needs to be stopped for recreation, or because I(state) is
C(stopped), please use the I(force_kill) option. Use I(keep_volumes) to retain volumes associated with a removed container.
type: str
default: started
choices:
- absent
- present
- stopped
- started
stop_signal:
description:
- Override default signal used to stop the container.
type: str
stop_timeout:
description:
- Number of seconds to wait for the container to stop before sending C(SIGKILL).
When the container is created by this module, its C(StopTimeout) configuration
will be set to this value.
- When the container is stopped, will be used as a timeout for stopping the
container. In case the container has a custom C(StopTimeout) configuration,
the behavior depends on the version of the docker daemon. New versions of
the docker daemon will always use the container's configured C(StopTimeout)
value if it has been configured.
type: int
trust_image_content:
description:
- If C(yes), skip image verification.
- The option has never been used by the module. It will be removed in Ansible 2.14.
type: bool
default: no
tmpfs:
description:
- Mount a tmpfs directory.
type: list
elements: str
version_added: 2.4
tty:
description:
- Allocate a pseudo-TTY.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
ulimits:
description:
- "List of ulimit options. A ulimit is specified as C(nofile:262144:262144)."
type: list
elements: str
sysctls:
description:
- Dictionary of key,value pairs.
type: dict
version_added: 2.4
user:
description:
- Sets the username or UID used and optionally the groupname or GID for the specified command.
- "Can be of the forms C(user), C(user:group), C(uid), C(uid:gid), C(user:gid) or C(uid:group)."
type: str
uts:
description:
- Set the UTS namespace mode for the container.
type: str
volumes:
description:
- List of volumes to mount within the container.
- "Use docker CLI-style syntax: C(/host:/container[:mode])"
- "Mount modes can be a comma-separated list of various modes such as C(ro), C(rw), C(consistent),
C(delegated), C(cached), C(rprivate), C(private), C(rshared), C(shared), C(rslave), C(slave), and
C(nocopy). Note that the docker daemon might not support all modes and combinations of such modes."
- SELinux hosts can additionally use C(z) or C(Z) to use a shared or private label for the volume.
- "Note that Ansible 2.7 and earlier only supported one mode, which had to be one of C(ro), C(rw),
C(z), and C(Z)."
type: list
elements: str
volume_driver:
description:
- The container volume driver.
type: str
volumes_from:
description:
- List of container names or IDs to get volumes from.
type: list
elements: str
working_dir:
description:
- Path to the working directory.
type: str
version_added: "2.4"
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
author:
- "Cove Schneider (@cove)"
- "Joshua Conner (@joshuaconner)"
- "Pavel Antonov (@softzilla)"
- "Thomas Steinbach (@ThomasSteinbach)"
- "Philippe Jandot (@zfil)"
- "Daan Oosterveld (@dusdanig)"
- "Chris Houseknecht (@chouseknecht)"
- "Kassian Sun (@kassiansun)"
- "Felix Fontein (@felixfontein)"
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "Docker API >= 1.20"
'''
EXAMPLES = '''
- name: Create a data container
docker_container:
name: mydata
image: busybox
volumes:
- /data
- name: Re-create a redis container
docker_container:
name: myredis
image: redis
command: redis-server --appendonly yes
state: present
recreate: yes
exposed_ports:
- 6379
volumes_from:
- mydata
- name: Restart a container
docker_container:
name: myapplication
image: someuser/appimage
state: started
restart: yes
links:
- "myredis:aliasedredis"
devices:
- "/dev/sda:/dev/xvda:rwm"
ports:
- "8080:9000"
- "127.0.0.1:8081:9001/udp"
env:
SECRET_KEY: "ssssh"
# Values which might be parsed as numbers, booleans or other types by the YAML parser need to be quoted
BOOLEAN_KEY: "yes"
- name: Container present
docker_container:
name: mycontainer
state: present
image: ubuntu:14.04
command: sleep infinity
- name: Stop a container
docker_container:
name: mycontainer
state: stopped
- name: Start 4 load-balanced containers
docker_container:
name: "container{{ item }}"
recreate: yes
image: someuser/anotherappimage
command: sleep 1d
with_sequence: count=4
- name: Remove container
docker_container:
name: ohno
state: absent
- name: Syslogging output
docker_container:
name: myservice
image: busybox
log_driver: syslog
log_options:
syslog-address: tcp://my-syslog-server:514
syslog-facility: daemon
# NOTE: in Docker 1.13+ the "syslog-tag" option was renamed to "tag".
# For older docker installs, use "syslog-tag" instead.
tag: myservice
- name: Create db container and connect to network
docker_container:
name: db_test
image: "postgres:latest"
networks:
- name: "{{ docker_network_name }}"
- name: Start container, connect to network and link
docker_container:
name: sleeper
image: ubuntu:14.04
networks:
- name: TestingNet
ipv4_address: "172.1.1.100"
aliases:
- sleepyzz
links:
- db_test:db
- name: TestingNet2
- name: Start a container with a command
docker_container:
name: sleepy
image: ubuntu:14.04
command: ["sleep", "infinity"]
- name: Add container to networks
docker_container:
name: sleepy
networks:
- name: TestingNet
ipv4_address: 172.1.1.18
links:
- sleeper
- name: TestingNet2
ipv4_address: 172.1.10.20
- name: Update network with aliases
docker_container:
name: sleepy
networks:
- name: TestingNet
aliases:
- sleepyz
- zzzz
- name: Remove container from one network
docker_container:
name: sleepy
networks:
- name: TestingNet2
purge_networks: yes
- name: Remove container from all networks
docker_container:
name: sleepy
purge_networks: yes
- name: Start a container and use an env file
docker_container:
name: agent
image: jenkinsci/ssh-slave
env_file: /var/tmp/jenkins/agent.env
- name: Create a container with limited capabilities
docker_container:
name: sleepy
image: ubuntu:16.04
command: sleep infinity
capabilities:
- sys_time
cap_drop:
- all
- name: Finer container restart/update control
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
volumes:
- /tmp:/tmp
comparisons:
image: ignore # don't restart containers with older versions of the image
env: strict # we want precisely this environment
volumes: allow_more_present # if there are more volumes, that's ok, as long as `/tmp:/tmp` is there
- name: Finer container restart/update control II
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
comparisons:
'*': ignore # by default, ignore *all* options (including image)
env: strict # except for environment variables; there, we want to be strict
- name: Start container with healthstatus
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# Check if nginx server is healthy by curl'ing the server.
# If this fails or times out, the healthcheck fails.
test: ["CMD", "curl", "--fail", "http://nginx.host.com"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 30s
- name: Remove healthcheck from container
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# The "NONE" check needs to be specified
test: ["NONE"]
- name: Start container with block device read limit
docker_container:
name: test
image: ubuntu:18.04
state: started
device_read_bps:
# Limit read rate for /dev/sda to 20 mebibytes per second
- path: /dev/sda
rate: 20M
device_read_iops:
# Limit read rate for /dev/sdb to 300 IO per second
- path: /dev/sdb
rate: 300
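# Illustrative example (container name is hypothetical); note that C(no)
# would need quotes, while on-failure does not:
- name: Start container with restart policy
  docker_container:
    name: myapp
    image: someuser/appimage
    state: started
    restart_policy: on-failure
    restart_retries: 5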
'''
RETURN = '''
container:
description:
- Facts representing the current state of the container. Matches the docker inspection output.
- Note that facts are part of the registered vars since Ansible 2.8. For compatibility reasons, the facts
are also accessible directly as C(docker_container). Note that the returned fact will be removed in Ansible 2.12.
- Before 2.3 this was C(ansible_docker_container) but was renamed in 2.3 to C(docker_container) due to
conflicts with the connection plugin.
- Empty if I(state) is C(absent).
- If I(detached) is C(false), will include C(Output) attribute containing any output from container run.
returned: always
type: dict
sample: '{
"AppArmorProfile": "",
"Args": [],
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"/usr/bin/supervisord"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
"Hostname": "8e47bf643eb9",
"Image": "lnmp_nginx:v1",
"Labels": {},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": {
"/tmp/lnmp/nginx-sites/logs/": {}
},
...
}'
'''
import os
import re
import shlex
import traceback
from distutils.version import LooseVersion
from time import sleep
from ansible.module_utils.common.text.formatters import human_to_bytes
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DifferenceTracker,
DockerBaseClass,
compare_generic,
is_image_name_id,
sanitize_result,
clean_dict_booleans_for_docker_api,
omit_none_from_dict,
parse_healthcheck,
DOCKER_COMMON_ARGS,
RequestException,
)
from ansible.module_utils.six import string_types
try:
from docker import utils
from ansible.module_utils.docker.common import docker_version
if LooseVersion(docker_version) >= LooseVersion('1.10.0'):
from docker.types import Ulimit, LogConfig
from docker import types as docker_types
else:
from docker.utils.types import Ulimit, LogConfig
from docker.errors import DockerException, APIError, NotFound
except Exception:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
REQUIRES_CONVERSION_TO_BYTES = [
'kernel_memory',
'memory',
'memory_reservation',
'memory_swap',
'shm_size'
]
def is_volume_permissions(mode):
for part in mode.split(','):
if part not in ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated', 'cached', 'rprivate', 'private', 'rshared', 'shared', 'rslave', 'slave', 'nocopy'):
return False
return True
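# Standalone sketch (illustration only) mirroring is_volume_permissions above:
# a mode string is accepted when every comma-separated part is a known mode.
def _demo_is_volume_permissions(mode):
    known = ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated', 'cached',
             'rprivate', 'private', 'rshared', 'shared', 'rslave', 'slave', 'nocopy')
    return all(part in known for part in mode.split(','))
# e.g. _demo_is_volume_permissions('ro,Z') -> True, _demo_is_volume_permissions('rw,bogus') -> False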
def parse_port_range(range_or_port, client):
'''
Parses a string containing either a single port or a range of ports.
Returns a list of integers for each port in the range.
'''
if '-' in range_or_port:
try:
start, end = [int(port) for port in range_or_port.split('-')]
except Exception:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
if end < start:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
return list(range(start, end + 1))
else:
try:
return [int(range_or_port)]
except Exception:
client.fail('Invalid port: "{0}"'.format(range_or_port))
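# Standalone sketch (illustration only) of the range expansion performed by
# parse_port_range above, without the client error handling:
def _demo_expand_port_range(range_or_port):
    if '-' in range_or_port:
        start, end = [int(port) for port in range_or_port.split('-')]
        return list(range(start, end + 1))
    return [int(range_or_port)]
# e.g. _demo_expand_port_range('8000-8002') -> [8000, 8001, 8002]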
def split_colon_ipv6(text, client):
'''
Split string by ':', while keeping IPv6 addresses in square brackets in one component.
'''
if '[' not in text:
return text.split(':')
start = 0
result = []
while start < len(text):
i = text.find('[', start)
if i < 0:
result.extend(text[start:].split(':'))
break
j = text.find(']', i)
if j < 0:
client.fail('Cannot find closing "]" in input "{0}" for opening "[" at index {1}!'.format(text, i + 1))
result.extend(text[start:i].split(':'))
k = text.find(':', j)
if k < 0:
result[-1] += text[i:]
start = len(text)
else:
result[-1] += text[i:k]
if k == len(text):
result.append('')
break
start = k + 1
return result
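# For illustration, split_colon_ipv6 tokenizes publish specifications like so
# (the client argument is only used to report errors):
#   split_colon_ipv6('127.0.0.1:8081:9001', client) -> ['127.0.0.1', '8081', '9001']
#   split_colon_ipv6('[::1]:8081:9001', client)     -> ['[::1]', '8081', '9001']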
class TaskParameters(DockerBaseClass):
'''
Access and parse module parameters
'''
def __init__(self, client):
super(TaskParameters, self).__init__()
self.client = client
self.auto_remove = None
self.blkio_weight = None
self.capabilities = None
self.cap_drop = None
self.cleanup = None
self.command = None
self.cpu_period = None
self.cpu_quota = None
self.cpus = None
self.cpuset_cpus = None
self.cpuset_mems = None
self.cpu_shares = None
self.detach = None
self.debug = None
self.devices = None
self.device_read_bps = None
self.device_write_bps = None
self.device_read_iops = None
self.device_write_iops = None
self.dns_servers = None
self.dns_opts = None
self.dns_search_domains = None
self.domainname = None
self.env = None
self.env_file = None
self.entrypoint = None
self.etc_hosts = None
self.exposed_ports = None
self.force_kill = None
self.groups = None
self.healthcheck = None
self.hostname = None
self.ignore_image = None
self.image = None
self.init = None
self.interactive = None
self.ipc_mode = None
self.keep_volumes = None
self.kernel_memory = None
self.kill_signal = None
self.labels = None
self.links = None
self.log_driver = None
self.output_logs = None
self.log_options = None
self.mac_address = None
self.memory = None
self.memory_reservation = None
self.memory_swap = None
self.memory_swappiness = None
self.mounts = None
self.name = None
self.network_mode = None
self.userns_mode = None
self.networks = None
self.networks_cli_compatible = None
self.oom_killer = None
self.oom_score_adj = None
self.paused = None
self.pid_mode = None
self.pids_limit = None
self.privileged = None
self.purge_networks = None
self.pull = None
self.read_only = None
self.recreate = None
self.removal_wait_timeout = None
self.restart = None
self.restart_retries = None
self.restart_policy = None
self.runtime = None
self.shm_size = None
self.security_opts = None
self.state = None
self.stop_signal = None
self.stop_timeout = None
self.tmpfs = None
self.trust_image_content = None
self.tty = None
self.user = None
self.uts = None
self.volumes = None
self.volume_binds = dict()
self.volumes_from = None
self.volume_driver = None
self.working_dir = None
for key, value in client.module.params.items():
setattr(self, key, value)
self.comparisons = client.comparisons
# If state is 'absent', parameters do not have to be parsed or interpreted.
# Only the container's name is needed.
if self.state == 'absent':
return
if self.cpus is not None:
self.cpus = int(round(self.cpus * 1E9))
if self.groups:
# In case integers are passed as groups, we need to convert them to
# strings as docker internally treats them as strings.
self.groups = [str(g) for g in self.groups]
for param_name in REQUIRES_CONVERSION_TO_BYTES:
if client.module.params.get(param_name):
try:
setattr(self, param_name, human_to_bytes(client.module.params.get(param_name)))
except ValueError as exc:
self.fail("Failed to convert %s to bytes: %s" % (param_name, exc))
self.publish_all_ports = False
self.published_ports = self._parse_publish_ports()
if self.published_ports in ('all', 'ALL'):
self.publish_all_ports = True
self.published_ports = None
self.ports = self._parse_exposed_ports(self.published_ports)
self.log("expose ports:")
self.log(self.ports, pretty_print=True)
self.links = self._parse_links(self.links)
if self.volumes:
self.volumes = self._expand_host_paths()
self.tmpfs = self._parse_tmpfs()
self.env = self._get_environment()
self.ulimits = self._parse_ulimits()
self.sysctls = self._parse_sysctls()
self.log_config = self._parse_log_config()
try:
self.healthcheck, self.disable_healthcheck = parse_healthcheck(self.healthcheck)
except ValueError as e:
self.fail(str(e))
self.exp_links = None
self.volume_binds = self._get_volume_binds(self.volumes)
self.pid_mode = self._replace_container_names(self.pid_mode)
self.ipc_mode = self._replace_container_names(self.ipc_mode)
self.network_mode = self._replace_container_names(self.network_mode)
self.log("volumes:")
self.log(self.volumes, pretty_print=True)
self.log("volume binds:")
self.log(self.volume_binds, pretty_print=True)
if self.networks:
for network in self.networks:
network['id'] = self._get_network_id(network['name'])
if not network['id']:
self.fail("Parameter error: network named %s could not be found. Does it exist?" % network['name'])
if network.get('links'):
network['links'] = self._parse_links(network['links'])
if self.mac_address:
# Ensure the MAC address uses colons instead of hyphens for later comparison
self.mac_address = self.mac_address.replace('-', ':')
if self.entrypoint:
# convert from list to str.
self.entrypoint = ' '.join([str(x) for x in self.entrypoint])
if self.command:
# convert from list to str
if isinstance(self.command, list):
self.command = ' '.join([str(x) for x in self.command])
self.mounts_opt, self.expected_mounts = self._process_mounts()
self._check_mount_target_collisions()
for param_name in ["device_read_bps", "device_write_bps"]:
if client.module.params.get(param_name):
self._process_rate_bps(option=param_name)
for param_name in ["device_read_iops", "device_write_iops"]:
if client.module.params.get(param_name):
self._process_rate_iops(option=param_name)
def fail(self, msg):
self.client.fail(msg)
@property
def update_parameters(self):
'''
Returns parameters used to update a container
'''
update_parameters = dict(
blkio_weight='blkio_weight',
cpu_period='cpu_period',
cpu_quota='cpu_quota',
cpu_shares='cpu_shares',
cpuset_cpus='cpuset_cpus',
cpuset_mems='cpuset_mems',
mem_limit='memory',
mem_reservation='memory_reservation',
memswap_limit='memory_swap',
kernel_memory='kernel_memory',
restart_policy='restart_policy',
)
result = dict()
for key, value in update_parameters.items():
if getattr(self, value, None) is not None:
if key == 'restart_policy' and self.client.option_minimal_versions[value]['supported']:
restart_policy = dict(Name=self.restart_policy,
MaximumRetryCount=self.restart_retries)
result[key] = restart_policy
elif self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
return result
@property
def create_parameters(self):
'''
Returns parameters used to create a container
'''
create_params = dict(
command='command',
domainname='domainname',
hostname='hostname',
user='user',
detach='detach',
stdin_open='interactive',
tty='tty',
ports='ports',
environment='env',
name='name',
entrypoint='entrypoint',
mac_address='mac_address',
labels='labels',
stop_signal='stop_signal',
working_dir='working_dir',
stop_timeout='stop_timeout',
healthcheck='healthcheck',
)
if self.client.docker_py_version < LooseVersion('3.0'):
# cpu_shares and volume_driver moved to create_host_config in > 3
create_params['cpu_shares'] = 'cpu_shares'
create_params['volume_driver'] = 'volume_driver'
result = dict(
host_config=self._host_config(),
volumes=self._get_mounts(),
)
for key, value in create_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
if self.networks_cli_compatible and self.networks:
network = self.networks[0]
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if network.get(para):
params[para] = network[para]
network_config = dict()
network_config[network['name']] = self.client.create_endpoint_config(**params)
result['networking_config'] = self.client.create_networking_config(network_config)
return result
def _expand_host_paths(self):
new_vols = []
for vol in self.volumes:
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
host, container, mode = parts
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if re.match(r'[.~]', host):
host = os.path.abspath(os.path.expanduser(host))
new_vols.append("%s:%s:%s" % (host, container, mode))
continue
elif len(parts) == 2:
if not is_volume_permissions(parts[1]) and re.match(r'[.~]', parts[0]):
host = os.path.abspath(os.path.expanduser(parts[0]))
new_vols.append("%s:%s:rw" % (host, parts[1]))
continue
new_vols.append(vol)
return new_vols
def _get_mounts(self):
'''
Return a list of container mounts.
:return:
'''
result = []
if self.volumes:
for vol in self.volumes:
# Only pass anonymous volumes to create container
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
continue
if len(parts) == 2:
if not is_volume_permissions(parts[1]):
continue
result.append(vol)
self.log("mounts:")
self.log(result, pretty_print=True)
return result
def _host_config(self):
'''
Returns parameters used to create a HostConfig object
'''
host_config_params = dict(
port_bindings='published_ports',
publish_all_ports='publish_all_ports',
links='links',
privileged='privileged',
dns='dns_servers',
dns_opt='dns_opts',
dns_search='dns_search_domains',
binds='volume_binds',
volumes_from='volumes_from',
network_mode='network_mode',
userns_mode='userns_mode',
cap_add='capabilities',
cap_drop='cap_drop',
extra_hosts='etc_hosts',
read_only='read_only',
ipc_mode='ipc_mode',
security_opt='security_opts',
ulimits='ulimits',
sysctls='sysctls',
log_config='log_config',
mem_limit='memory',
memswap_limit='memory_swap',
mem_swappiness='memory_swappiness',
oom_score_adj='oom_score_adj',
oom_kill_disable='oom_killer',
shm_size='shm_size',
group_add='groups',
devices='devices',
pid_mode='pid_mode',
tmpfs='tmpfs',
init='init',
uts_mode='uts',
runtime='runtime',
auto_remove='auto_remove',
device_read_bps='device_read_bps',
device_write_bps='device_write_bps',
device_read_iops='device_read_iops',
device_write_iops='device_write_iops',
pids_limit='pids_limit',
mounts='mounts',
nano_cpus='cpus',
)
if self.client.docker_py_version >= LooseVersion('1.9') and self.client.docker_api_version >= LooseVersion('1.22'):
# blkio_weight can always be updated, but can only be set on creation
# when Docker SDK for Python and Docker API are new enough
host_config_params['blkio_weight'] = 'blkio_weight'
if self.client.docker_py_version >= LooseVersion('3.0'):
# cpu_shares and volume_driver moved to create_host_config in > 3
host_config_params['cpu_shares'] = 'cpu_shares'
host_config_params['volume_driver'] = 'volume_driver'
params = dict()
for key, value in host_config_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
params[key] = getattr(self, value)
if self.restart_policy:
params['restart_policy'] = dict(Name=self.restart_policy,
MaximumRetryCount=self.restart_retries)
if 'mounts' in params:
params['mounts'] = self.mounts_opt
return self.client.create_host_config(**params)
@property
def default_host_ip(self):
ip = '0.0.0.0'
if not self.networks:
return ip
for net in self.networks:
if net.get('name'):
try:
network = self.client.inspect_network(net['name'])
if network.get('Driver') == 'bridge' and \
network.get('Options', {}).get('com.docker.network.bridge.host_binding_ipv4'):
ip = network['Options']['com.docker.network.bridge.host_binding_ipv4']
break
except NotFound as nfe:
self.client.fail(
"Cannot inspect the network '{0}' to determine the default IP: {1}".format(net['name'], nfe),
exception=traceback.format_exc()
)
return ip
def _parse_publish_ports(self):
'''
Parse ports from docker CLI syntax
'''
if self.published_ports is None:
return None
if 'all' in self.published_ports:
return 'all'
default_ip = self.default_host_ip
binds = {}
for port in self.published_ports:
parts = split_colon_ipv6(str(port), self.client)
container_port = parts[-1]
protocol = ''
if '/' in container_port:
container_port, protocol = parts[-1].split('/')
container_ports = parse_port_range(container_port, self.client)
p_len = len(parts)
if p_len == 1:
port_binds = len(container_ports) * [(default_ip,)]
elif p_len == 2:
port_binds = [(default_ip, port) for port in parse_port_range(parts[0], self.client)]
elif p_len == 3:
# We only allow IPv4 and IPv6 addresses for the bind address
ipaddr = parts[0]
if not re.match(r'^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$', parts[0]) and not re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
self.fail(('Bind addresses for published ports must be IPv4 or IPv6 addresses, not hostnames. '
'Use the dig lookup to resolve hostnames. (Found hostname: {0})').format(ipaddr))
if re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
ipaddr = ipaddr[1:-1]
if parts[1]:
port_binds = [(ipaddr, port) for port in parse_port_range(parts[1], self.client)]
else:
port_binds = len(container_ports) * [(ipaddr,)]
for bind, container_port in zip(port_binds, container_ports):
idx = '{0}/{1}'.format(container_port, protocol) if protocol else container_port
if idx in binds:
old_bind = binds[idx]
if isinstance(old_bind, list):
old_bind.append(bind)
else:
binds[idx] = [old_bind, bind]
else:
binds[idx] = bind
return binds
def _get_volume_binds(self, volumes):
'''
Extract host bindings, if any, from list of volume mapping strings.
:return: dictionary of bind mappings
'''
result = dict()
if volumes:
for vol in volumes:
host = None
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
host, container, mode = parts
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
elif len(parts) == 2:
if not is_volume_permissions(parts[1]):
host, container, mode = (parts + ['rw'])
if host is not None:
result[host] = dict(
bind=container,
mode=mode
)
return result
def _parse_exposed_ports(self, published_ports):
'''
Parse exposed ports from docker CLI-style ports syntax.

Example: '8080/udp' -> ('8080', 'udp'); '9000' -> ('9000', 'tcp')
'''
exposed = []
if self.exposed_ports:
for port in self.exposed_ports:
port = str(port).strip()
protocol = 'tcp'
match = re.search(r'(/.+$)', port)
if match:
protocol = match.group(1).replace('/', '')
port = re.sub(r'/.+$', '', port)
exposed.append((port, protocol))
if published_ports:
# Any published port should also be exposed
for publish_port in published_ports:
match = False
if isinstance(publish_port, string_types) and '/' in publish_port:
port, protocol = publish_port.split('/')
port = int(port)
else:
protocol = 'tcp'
port = int(publish_port)
for exposed_port in exposed:
if exposed_port[1] != protocol:
continue
if isinstance(exposed_port[0], string_types) and '-' in exposed_port[0]:
start_port, end_port = exposed_port[0].split('-')
if int(start_port) <= port <= int(end_port):
match = True
elif exposed_port[0] == port:
match = True
if not match:
exposed.append((port, protocol))
return exposed
@staticmethod
def _parse_links(links):
'''
Turn links into a list of (name, alias) tuples.

Example: 'db:database' -> ('db', 'database'); 'redis' -> ('redis', 'redis')
'''
if links is None:
return None
result = []
for link in links:
parsed_link = link.split(':', 1)
if len(parsed_link) == 2:
result.append((parsed_link[0], parsed_link[1]))
else:
result.append((parsed_link[0], parsed_link[0]))
return result
def _parse_ulimits(self):
'''
Turn ulimits into an array of Ulimit objects.

Example: 'nofile:1024:2048' -> Ulimit(name='nofile', soft=1024, hard=2048);
a two-part spec like 'nofile:1024' uses the same value for soft and hard.
'''
if self.ulimits is None:
return None
results = []
for limit in self.ulimits:
limits = dict()
pieces = limit.split(':')
if len(pieces) >= 2:
limits['name'] = pieces[0]
limits['soft'] = int(pieces[1])
limits['hard'] = int(pieces[1])
if len(pieces) == 3:
limits['hard'] = int(pieces[2])
try:
results.append(Ulimit(**limits))
except ValueError as exc:
self.fail("Error parsing ulimits value %s - %s" % (limit, exc))
return results
def _parse_sysctls(self):
'''
Return sysctls as a plain dict; no further parsing is needed.
'''
return self.sysctls
def _parse_log_config(self):
'''
Create a LogConfig object
'''
if self.log_driver is None:
return None
options = dict(
Type=self.log_driver,
Config=dict()
)
if self.log_options is not None:
options['Config'] = dict()
for k, v in self.log_options.items():
if not isinstance(v, string_types):
self.client.module.warn(
"Non-string value found for log_options option '%s'. The value is automatically converted to '%s'. "
"If this is not correct, or you want to avoid such warnings, please quote the value." % (k, str(v))
)
v = str(v)
self.log_options[k] = v
options['Config'][k] = v
try:
return LogConfig(**options)
except ValueError as exc:
self.fail('Error parsing logging options - %s' % (exc))
def _parse_tmpfs(self):
'''
Turn tmpfs into a hash of mount points to options.

Example: '/run:rw,size=64m' -> {'/run': 'rw,size=64m'}; '/tmp' -> {'/tmp': ''}
'''
result = dict()
if self.tmpfs is None:
return result
for tmpfs_spec in self.tmpfs:
split_spec = tmpfs_spec.split(":", 1)
if len(split_spec) > 1:
result[split_spec[0]] = split_spec[1]
else:
result[split_spec[0]] = ""
return result
def _get_environment(self):
"""
If environment file is combined with explicit environment variables, the explicit environment variables
take precedence.
"""
final_env = {}
if self.env_file:
parsed_env_file = utils.parse_env_file(self.env_file)
for name, value in parsed_env_file.items():
final_env[name] = str(value)
if self.env:
for name, value in self.env.items():
if not isinstance(value, string_types):
self.fail("Non-string value found for env option. Ambiguous env options must be "
"wrapped in quotes to avoid them being interpreted. Key: %s" % (name, ))
final_env[name] = str(value)
return final_env
def _get_network_id(self, network_name):
network_id = None
try:
for network in self.client.networks(names=[network_name]):
if network['Name'] == network_name:
network_id = network['Id']
break
except Exception as exc:
self.fail("Error getting network id for %s - %s" % (network_name, str(exc)))
return network_id
def _process_mounts(self):
if self.mounts is None:
return None, None
mounts_list = []
mounts_expected = []
for mount in self.mounts:
target = mount['target']
datatype = mount['type']
mount_dict = dict(mount)
# Sanity checks (fail early instead of waiting for docker-py to reject the input)
if mount_dict.get('source') is None and datatype != 'tmpfs':
self.client.fail('source must be specified for mount "{0}" of type "{1}"'.format(target, datatype))
mount_option_types = dict(
volume_driver='volume',
volume_options='volume',
propagation='bind',
no_copy='volume',
labels='volume',
tmpfs_size='tmpfs',
tmpfs_mode='tmpfs',
)
for option, req_datatype in mount_option_types.items():
if mount_dict.get(option) is not None and datatype != req_datatype:
self.client.fail('{0} cannot be specified for mount "{1}" of type "{2}" (needs type "{3}")'.format(option, target, datatype, req_datatype))
# Handle volume_driver and volume_options
volume_driver = mount_dict.pop('volume_driver')
volume_options = mount_dict.pop('volume_options')
if volume_driver:
if volume_options:
volume_options = clean_dict_booleans_for_docker_api(volume_options)
mount_dict['driver_config'] = docker_types.DriverConfig(name=volume_driver, options=volume_options)
if mount_dict['labels']:
mount_dict['labels'] = clean_dict_booleans_for_docker_api(mount_dict['labels'])
if mount_dict.get('tmpfs_size') is not None:
try:
mount_dict['tmpfs_size'] = human_to_bytes(mount_dict['tmpfs_size'])
except ValueError as exc:
self.fail('Failed to convert tmpfs_size of mount "{0}" to bytes: {1}'.format(target, exc))
if mount_dict.get('tmpfs_mode') is not None:
try:
mount_dict['tmpfs_mode'] = int(mount_dict['tmpfs_mode'], 8)
except Exception as dummy:
self.client.fail('tmpfs_mode of mount "{0}" is not an octal string!'.format(target))
# Fill expected mount dict
mount_expected = dict(mount)
mount_expected['tmpfs_size'] = mount_dict['tmpfs_size']
mount_expected['tmpfs_mode'] = mount_dict['tmpfs_mode']
# Add result to lists
mounts_list.append(docker_types.Mount(**mount_dict))
mounts_expected.append(omit_none_from_dict(mount_expected))
return mounts_list, mounts_expected
def _process_rate_bps(self, option):
"""
Format device_read_bps and device_write_bps option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
device_dict['Rate'] = human_to_bytes(device_dict['Rate'])
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _process_rate_iops(self, option):
"""
Format device_read_iops and device_write_iops option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _replace_container_names(self, mode):
"""
Parse IPC and PID modes. If they contain a container name, replace
with the container's ID.
"""
if mode is None or not mode.startswith('container:'):
return mode
container_name = mode[len('container:'):]
# Try to inspect the container to see whether this is an ID or a
# name (and in the latter case, retrieve its ID)
container = self.client.get_container(container_name)
if container is None:
# If we can't find the container, issue a warning and continue with
# what the user specified.
self.client.module.warn('Cannot find a container with name or ID "{0}"'.format(container_name))
return mode
return 'container:{0}'.format(container['Id'])
def _check_mount_target_collisions(self):
last = dict()
def f(t, name):
if t in last:
if name == last[t]:
self.client.fail('The mount point "{0}" appears twice in the {1} option'.format(t, name))
else:
self.client.fail('The mount point "{0}" appears both in the {1} and {2} option'.format(t, name, last[t]))
last[t] = name
if self.expected_mounts:
for t in [m['target'] for m in self.expected_mounts]:
f(t, 'mounts')
if self.volumes:
for v in self.volumes:
vs = v.split(':')
f(vs[0 if len(vs) == 1 else 1], 'volumes')
class Container(DockerBaseClass):
def __init__(self, container, parameters):
super(Container, self).__init__()
self.raw = container
self.Id = None
self.container = container
if container:
self.Id = container['Id']
self.Image = container['Image']
self.log(self.container, pretty_print=True)
self.parameters = parameters
self.parameters.expected_links = None
self.parameters.expected_ports = None
self.parameters.expected_exposed = None
self.parameters.expected_volumes = None
self.parameters.expected_ulimits = None
self.parameters.expected_sysctls = None
self.parameters.expected_etc_hosts = None
self.parameters.expected_env = None
self.parameters_map = dict()
self.parameters_map['expected_links'] = 'links'
self.parameters_map['expected_ports'] = 'expected_ports'
self.parameters_map['expected_exposed'] = 'exposed_ports'
self.parameters_map['expected_volumes'] = 'volumes'
self.parameters_map['expected_ulimits'] = 'ulimits'
self.parameters_map['expected_sysctls'] = 'sysctls'
self.parameters_map['expected_etc_hosts'] = 'etc_hosts'
self.parameters_map['expected_env'] = 'env'
self.parameters_map['expected_entrypoint'] = 'entrypoint'
self.parameters_map['expected_binds'] = 'volumes'
self.parameters_map['expected_cmd'] = 'command'
self.parameters_map['expected_devices'] = 'devices'
self.parameters_map['expected_healthcheck'] = 'healthcheck'
self.parameters_map['expected_mounts'] = 'mounts'
def fail(self, msg):
self.parameters.client.fail(msg)
@property
def exists(self):
return True if self.container else False
@property
def removing(self):
if self.container and self.container.get('State'):
return self.container['State'].get('Status') == 'removing'
return False
@property
def running(self):
if self.container and self.container.get('State'):
if self.container['State'].get('Running') and not self.container['State'].get('Ghost', False):
return True
return False
@property
def paused(self):
if self.container and self.container.get('State'):
return self.container['State'].get('Paused', False)
return False
def _compare(self, a, b, compare):
'''
Compare values a and b as described in compare.
'''
return compare_generic(a, b, compare['comparison'], compare['type'])
def _decode_mounts(self, mounts):
if not mounts:
return mounts
result = []
empty_dict = dict()
for mount in mounts:
res = dict()
res['type'] = mount.get('Type')
res['source'] = mount.get('Source')
res['target'] = mount.get('Target')
res['read_only'] = mount.get('ReadOnly', False) # golang's omitempty for bool returns None for False
res['consistency'] = mount.get('Consistency')
res['propagation'] = mount.get('BindOptions', empty_dict).get('Propagation')
res['no_copy'] = mount.get('VolumeOptions', empty_dict).get('NoCopy', False)
res['labels'] = mount.get('VolumeOptions', empty_dict).get('Labels', empty_dict)
res['volume_driver'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Name')
res['volume_options'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Options', empty_dict)
res['tmpfs_size'] = mount.get('TmpfsOptions', empty_dict).get('SizeBytes')
res['tmpfs_mode'] = mount.get('TmpfsOptions', empty_dict).get('Mode')
result.append(res)
return result
def has_different_configuration(self, image):
'''
Diff parameters vs existing container config. Returns tuple: (True | False, List of differences)
'''
self.log('Starting has_different_configuration')
self.parameters.expected_entrypoint = self._get_expected_entrypoint()
self.parameters.expected_links = self._get_expected_links()
self.parameters.expected_ports = self._get_expected_ports()
self.parameters.expected_exposed = self._get_expected_exposed(image)
self.parameters.expected_volumes = self._get_expected_volumes(image)
self.parameters.expected_binds = self._get_expected_binds(image)
self.parameters.expected_ulimits = self._get_expected_ulimits(self.parameters.ulimits)
self.parameters.expected_sysctls = self._get_expected_sysctls(self.parameters.sysctls)
self.parameters.expected_etc_hosts = self._convert_simple_dict_to_list('etc_hosts')
self.parameters.expected_env = self._get_expected_env(image)
self.parameters.expected_cmd = self._get_expected_cmd()
self.parameters.expected_devices = self._get_expected_devices()
self.parameters.expected_healthcheck = self._get_expected_healthcheck()
if not self.container.get('HostConfig'):
self.fail("has_config_diff: Error parsing container properties. HostConfig missing.")
if not self.container.get('Config'):
self.fail("has_config_diff: Error parsing container properties. Config missing.")
if not self.container.get('NetworkSettings'):
self.fail("has_config_diff: Error parsing container properties. NetworkSettings missing.")
host_config = self.container['HostConfig']
log_config = host_config.get('LogConfig', dict())
config = self.container['Config']
network = self.container['NetworkSettings']
# The previous version of the docker module ignored the detach state by
# assuming if the container was running, it must have been detached.
detach = not (config.get('AttachStderr') and config.get('AttachStdout'))
# "ExposedPorts": null returns None type & causes AttributeError - PR #5517
if config.get('ExposedPorts') is not None:
expected_exposed = [self._normalize_port(p) for p in config.get('ExposedPorts', dict()).keys()]
else:
expected_exposed = []
# Map parameters to container inspect results
config_mapping = dict(
expected_cmd=config.get('Cmd'),
domainname=config.get('Domainname'),
hostname=config.get('Hostname'),
user=config.get('User'),
detach=detach,
init=host_config.get('Init'),
interactive=config.get('OpenStdin'),
capabilities=host_config.get('CapAdd'),
cap_drop=host_config.get('CapDrop'),
expected_devices=host_config.get('Devices'),
dns_servers=host_config.get('Dns'),
dns_opts=host_config.get('DnsOptions'),
dns_search_domains=host_config.get('DnsSearch'),
expected_env=(config.get('Env') or []),
expected_entrypoint=config.get('Entrypoint'),
expected_etc_hosts=host_config['ExtraHosts'],
expected_exposed=expected_exposed,
groups=host_config.get('GroupAdd'),
ipc_mode=host_config.get("IpcMode"),
labels=config.get('Labels'),
expected_links=host_config.get('Links'),
mac_address=network.get('MacAddress'),
memory_swappiness=host_config.get('MemorySwappiness'),
network_mode=host_config.get('NetworkMode'),
userns_mode=host_config.get('UsernsMode'),
oom_killer=host_config.get('OomKillDisable'),
oom_score_adj=host_config.get('OomScoreAdj'),
pid_mode=host_config.get('PidMode'),
privileged=host_config.get('Privileged'),
expected_ports=host_config.get('PortBindings'),
read_only=host_config.get('ReadonlyRootfs'),
runtime=host_config.get('Runtime'),
shm_size=host_config.get('ShmSize'),
security_opts=host_config.get("SecurityOpt"),
stop_signal=config.get("StopSignal"),
tmpfs=host_config.get('Tmpfs'),
tty=config.get('Tty'),
expected_ulimits=host_config.get('Ulimits'),
expected_sysctls=host_config.get('Sysctls'),
uts=host_config.get('UTSMode'),
expected_volumes=config.get('Volumes'),
expected_binds=host_config.get('Binds'),
volume_driver=host_config.get('VolumeDriver'),
volumes_from=host_config.get('VolumesFrom'),
working_dir=config.get('WorkingDir'),
publish_all_ports=host_config.get('PublishAllPorts'),
expected_healthcheck=config.get('Healthcheck'),
disable_healthcheck=(not config.get('Healthcheck') or config.get('Healthcheck').get('Test') == ['NONE']),
device_read_bps=host_config.get('BlkioDeviceReadBps'),
device_write_bps=host_config.get('BlkioDeviceWriteBps'),
device_read_iops=host_config.get('BlkioDeviceReadIOps'),
device_write_iops=host_config.get('BlkioDeviceWriteIOps'),
pids_limit=host_config.get('PidsLimit'),
# According to https://github.com/moby/moby/, support for HostConfig.Mounts
# has been included at least since v17.03.0-ce, which has API version 1.26.
# The previous tag, v1.9.1, has API version 1.21 and does not have
# HostConfig.Mounts. Whether API 1.25 already supports it is unclear.
expected_mounts=self._decode_mounts(host_config.get('Mounts')),
cpus=host_config.get('NanoCpus'),
)
# Options which don't make sense without their accompanying option
if self.parameters.log_driver:
config_mapping['log_driver'] = log_config.get('Type')
config_mapping['log_options'] = log_config.get('Config')
if self.parameters.client.option_minimal_versions['auto_remove']['supported']:
# auto_remove is only supported in Docker SDK for Python >= 2.0.0; unfortunately
# it has a default value, that's why we have to jump through the hoops here
config_mapping['auto_remove'] = host_config.get('AutoRemove')
if self.parameters.client.option_minimal_versions['stop_timeout']['supported']:
# stop_timeout is only supported in Docker SDK for Python >= 2.1. Note that
# stop_timeout has a hybrid role, in that it used to be something only used
# for stopping containers, and is now also used as a container property.
# That's why it needs special handling here.
config_mapping['stop_timeout'] = config.get('StopTimeout')
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# For docker API < 1.22, update_container() is not supported. Thus
# we need to handle all limits which are usually handled by
# update_container() as configuration changes which require a container
# restart.
restart_policy = host_config.get('RestartPolicy', dict())
# Options which don't make sense without their accompanying option
if self.parameters.restart_policy:
config_mapping['restart_retries'] = restart_policy.get('MaximumRetryCount')
config_mapping.update(dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
restart_policy=restart_policy.get('Name')
))
differences = DifferenceTracker()
for key, value in config_mapping.items():
minimal_version = self.parameters.client.option_minimal_versions.get(key, {})
if not minimal_version.get('supported', True):
continue
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
self.log('check differences %s %s vs %s (%s)' % (key, getattr(self.parameters, key), str(value), compare))
if getattr(self.parameters, key, None) is not None:
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
p = getattr(self.parameters, key)
c = value
if compare['type'] == 'set':
# Since the order does not matter, sort so that the diff output is better.
if p is not None:
p = sorted(p)
if c is not None:
c = sorted(c)
elif compare['type'] == 'set(dict)':
# Since the order does not matter, sort so that the diff output is better.
if key == 'expected_mounts':
# For selected values, use one entry as key
def sort_key_fn(x):
return x['target']
else:
# We sort the list of dictionaries by using the sorted items of a dict as its key.
def sort_key_fn(x):
return sorted((a, str(b)) for a, b in x.items())
if p is not None:
p = sorted(p, key=sort_key_fn)
if c is not None:
c = sorted(c, key=sort_key_fn)
differences.add(key, parameter=p, active=c)
has_differences = not differences.empty
return has_differences, differences
def has_different_resource_limits(self):
'''
Diff parameters and container resource limits
'''
if not self.container.get('HostConfig'):
self.fail("limits_differ_from_container: Error parsing container properties. HostConfig missing.")
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# update_container() call not supported
return False, []
host_config = self.container['HostConfig']
restart_policy = host_config.get('RestartPolicy') or dict()
config_mapping = dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
restart_policy=restart_policy.get('Name')
)
# Options which don't make sense without their accompanying option
if self.parameters.restart_policy:
config_mapping['restart_retries'] = restart_policy.get('MaximumRetryCount')
differences = DifferenceTracker()
for key, value in config_mapping.items():
if getattr(self.parameters, key, None):
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
differences.add(key, parameter=getattr(self.parameters, key), active=value)
different = not differences.empty
return different, differences
def has_network_differences(self):
'''
Check if the container is connected to requested networks with expected options: links, aliases, ipv4, ipv6
'''
different = False
differences = []
if not self.parameters.networks:
return different, differences
if not self.container.get('NetworkSettings'):
self.fail("has_missing_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings']['Networks']
for network in self.parameters.networks:
network_info = connected_networks.get(network['name'])
if network_info is None:
different = True
differences.append(dict(
parameter=network,
container=None
))
else:
diff = False
network_info_ipam = network_info.get('IPAMConfig') or {}
if network.get('ipv4_address') and network['ipv4_address'] != network_info_ipam.get('IPv4Address'):
diff = True
if network.get('ipv6_address') and network['ipv6_address'] != network_info_ipam.get('IPv6Address'):
diff = True
if network.get('aliases'):
if not compare_generic(network['aliases'], network_info.get('Aliases'), 'allow_more_present', 'set'):
diff = True
if network.get('links'):
expected_links = []
for link, alias in network['links']:
expected_links.append("%s:%s" % (link, alias))
if not compare_generic(expected_links, network_info.get('Links'), 'allow_more_present', 'set'):
diff = True
if diff:
different = True
differences.append(dict(
parameter=network,
container=dict(
name=network['name'],
ipv4_address=network_info_ipam.get('IPv4Address'),
ipv6_address=network_info_ipam.get('IPv6Address'),
aliases=network_info.get('Aliases'),
links=network_info.get('Links')
)
))
return different, differences
def has_extra_networks(self):
'''
Check if the container is connected to non-requested networks
'''
extra_networks = []
extra = False
if not self.container.get('NetworkSettings'):
self.fail("has_extra_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings'].get('Networks')
if connected_networks:
for network, network_config in connected_networks.items():
keep = False
if self.parameters.networks:
for expected_network in self.parameters.networks:
if expected_network['name'] == network:
keep = True
if not keep:
extra = True
extra_networks.append(dict(name=network, id=network_config['NetworkID']))
return extra, extra_networks
def _get_expected_devices(self):
if not self.parameters.devices:
return None
expected_devices = []
for device in self.parameters.devices:
parts = device.split(':')
if len(parts) == 1:
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[0],
PathOnHost=parts[0]
))
elif len(parts) == 2:
parts = device.split(':')
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[1],
PathOnHost=parts[0]
)
)
else:
expected_devices.append(
dict(
CgroupPermissions=parts[2],
PathInContainer=parts[1],
PathOnHost=parts[0]
))
return expected_devices
def _get_expected_entrypoint(self):
if not self.parameters.entrypoint:
return None
return shlex.split(self.parameters.entrypoint)
def _get_expected_ports(self):
if not self.parameters.published_ports:
return None
expected_bound_ports = {}
for container_port, config in self.parameters.published_ports.items():
if isinstance(container_port, int):
container_port = "%s/tcp" % container_port
if len(config) == 1:
if isinstance(config[0], int):
expected_bound_ports[container_port] = [{'HostIp': "0.0.0.0", 'HostPort': config[0]}]
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': ""}]
elif isinstance(config[0], tuple):
expected_bound_ports[container_port] = []
for host_ip, host_port in config:
expected_bound_ports[container_port].append({'HostIp': host_ip, 'HostPort': str(host_port)})
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': str(config[1])}]
return expected_bound_ports
def _get_expected_links(self):
if self.parameters.links is None:
return None
self.log('parameter links:')
self.log(self.parameters.links, pretty_print=True)
exp_links = []
for link, alias in self.parameters.links:
exp_links.append("/%s:%s/%s" % (link, ('/' + self.parameters.name), alias))
return exp_links
def _get_expected_binds(self, image):
self.log('_get_expected_binds')
image_vols = []
if image:
image_vols = self._get_image_binds(image[self.parameters.client.image_inspect_source].get('Volumes'))
param_vols = []
if self.parameters.volumes:
for vol in self.parameters.volumes:
host = None
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
host, container, mode = parts
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(parts) == 2:
if not is_volume_permissions(parts[1]):
host, container, mode = parts + ['rw']
if host:
param_vols.append("%s:%s:%s" % (host, container, mode))
result = list(set(image_vols + param_vols))
self.log("expected_binds:")
self.log(result, pretty_print=True)
return result
def _get_image_binds(self, volumes):
'''
Convert array of binds to array of strings with format host_path:container_path:mode
:param volumes: array of bind dicts
:return: array of strings
'''
results = []
if isinstance(volumes, dict):
results += self._get_bind_from_dict(volumes)
elif isinstance(volumes, list):
for vol in volumes:
results += self._get_bind_from_dict(vol)
return results
@staticmethod
def _get_bind_from_dict(volume_dict):
results = []
if volume_dict:
for host_path, config in volume_dict.items():
if isinstance(config, dict) and config.get('bind'):
container_path = config.get('bind')
mode = config.get('mode', 'rw')
results.append("%s:%s:%s" % (host_path, container_path, mode))
return results
def _get_expected_volumes(self, image):
self.log('_get_expected_volumes')
expected_vols = dict()
if image and image[self.parameters.client.image_inspect_source].get('Volumes'):
expected_vols.update(image[self.parameters.client.image_inspect_source].get('Volumes'))
if self.parameters.volumes:
for vol in self.parameters.volumes:
# We only expect anonymous volumes to show up in the list
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
continue
if len(parts) == 2:
if not is_volume_permissions(parts[1]):
continue
expected_vols[vol] = dict()
if not expected_vols:
expected_vols = None
self.log("expected_volumes:")
self.log(expected_vols, pretty_print=True)
return expected_vols
def _get_expected_env(self, image):
self.log('_get_expected_env')
expected_env = dict()
if image and image[self.parameters.client.image_inspect_source].get('Env'):
for env_var in image[self.parameters.client.image_inspect_source]['Env']:
parts = env_var.split('=', 1)
expected_env[parts[0]] = parts[1]
if self.parameters.env:
expected_env.update(self.parameters.env)
param_env = []
for key, value in expected_env.items():
param_env.append("%s=%s" % (key, value))
return param_env
def _get_expected_exposed(self, image):
self.log('_get_expected_exposed')
image_ports = []
if image:
image_exposed_ports = image[self.parameters.client.image_inspect_source].get('ExposedPorts') or {}
image_ports = [self._normalize_port(p) for p in image_exposed_ports.keys()]
param_ports = []
if self.parameters.ports:
param_ports = [str(p[0]) + '/' + p[1] for p in self.parameters.ports]
result = list(set(image_ports + param_ports))
self.log(result, pretty_print=True)
return result
def _get_expected_ulimits(self, config_ulimits):
self.log('_get_expected_ulimits')
if config_ulimits is None:
return None
results = []
for limit in config_ulimits:
results.append(dict(
Name=limit.name,
Soft=limit.soft,
Hard=limit.hard
))
return results
def _get_expected_sysctls(self, config_sysctls):
self.log('_get_expected_sysctls')
if config_sysctls is None:
return None
result = dict()
for key, value in config_sysctls.items():
result[key] = str(value)
return result
def _get_expected_cmd(self):
self.log('_get_expected_cmd')
if not self.parameters.command:
return None
return shlex.split(self.parameters.command)
def _convert_simple_dict_to_list(self, param_name, join_with=':'):
if getattr(self.parameters, param_name, None) is None:
return None
results = []
for key, value in getattr(self.parameters, param_name).items():
results.append("%s%s%s" % (key, join_with, value))
return results
def _normalize_port(self, port):
if '/' not in port:
return port + '/tcp'
return port
def _get_expected_healthcheck(self):
self.log('_get_expected_healthcheck')
expected_healthcheck = dict()
if self.parameters.healthcheck:
expected_healthcheck.update([(k.title().replace("_", ""), v)
for k, v in self.parameters.healthcheck.items()])
return expected_healthcheck
class ContainerManager(DockerBaseClass):
'''
Perform container management tasks
'''
def __init__(self, client):
super(ContainerManager, self).__init__()
if client.module.params.get('log_options') and not client.module.params.get('log_driver'):
client.module.warn('log_options is ignored when log_driver is not specified')
if client.module.params.get('healthcheck') and not client.module.params.get('healthcheck').get('test'):
client.module.warn('healthcheck is ignored when test is not specified')
if client.module.params.get('restart_retries') is not None and not client.module.params.get('restart_policy'):
client.module.warn('restart_retries is ignored when restart_policy is not specified')
self.client = client
self.parameters = TaskParameters(client)
self.check_mode = self.client.check_mode
self.results = {'changed': False, 'actions': []}
self.diff = {}
self.diff_tracker = DifferenceTracker()
self.facts = {}
state = self.parameters.state
if state in ('stopped', 'started', 'present'):
self.present(state)
elif state == 'absent':
self.absent()
if not self.check_mode and not self.parameters.debug:
self.results.pop('actions')
if self.client.module._diff or self.parameters.debug:
self.diff['before'], self.diff['after'] = self.diff_tracker.get_before_after()
self.results['diff'] = self.diff
if self.facts:
self.results['ansible_facts'] = {'docker_container': self.facts}
self.results['container'] = self.facts
def wait_for_state(self, container_id, complete_states=None, wait_states=None, accept_removal=False, max_wait=None):
delay = 1.0
total_wait = 0
while True:
# Inspect container
result = self.client.get_container_by_id(container_id)
if result is None:
if accept_removal:
return
msg = 'Encountered vanished container while waiting for container "{0}"'
self.fail(msg.format(container_id))
# Check container state
state = result.get('State', {}).get('Status')
if complete_states is not None and state in complete_states:
return
if wait_states is not None and state not in wait_states:
msg = 'Encountered unexpected state "{1}" while waiting for container "{0}"'
self.fail(msg.format(container_id, state))
# Wait
if max_wait is not None:
if total_wait > max_wait:
msg = 'Timeout of {1} seconds exceeded while waiting for container "{0}"'
self.fail(msg.format(container_id, max_wait))
if total_wait + delay > max_wait:
delay = max_wait - total_wait
sleep(delay)
total_wait += delay
# Exponential backoff, but never wait longer than 10 seconds
# (1.1**24 < 10, 1.1**25 > 10, so it will take 25 iterations
# until the maximal 10 seconds delay is reached. By then, the
# code will have slept for ~1.5 minutes.)
delay = min(delay * 1.1, 10)
def present(self, state):
container = self._get_container(self.parameters.name)
was_running = container.running
was_paused = container.paused
container_created = False
# If the image parameter was passed then we need to deal with the image
# version comparison. Otherwise we handle this depending on whether
# the container already runs or not; in the former case, if the
# container needs to be restarted, we use the existing container's
# image ID.
image = self._get_image()
self.log(image, pretty_print=True)
if not container.exists or container.removing:
# New container
if container.removing:
self.log('Found container in removal phase')
else:
self.log('No container found')
if not self.parameters.image:
self.fail('Cannot create container when image is not specified!')
self.diff_tracker.add('exists', parameter=True, active=False)
if container.removing and not self.check_mode:
# Wait for container to be removed before trying to create it
self.wait_for_state(
container.Id, wait_states=['removing'], accept_removal=True, max_wait=self.parameters.removal_wait_timeout)
new_container = self.container_create(self.parameters.image, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
else:
# Existing container
different, differences = container.has_different_configuration(image)
image_different = False
if self.parameters.comparisons['image']['comparison'] == 'strict':
image_different = self._image_is_different(image, container)
if image_different or different or self.parameters.recreate:
self.diff_tracker.merge(differences)
self.diff['differences'] = differences.get_legacy_docker_container_diffs()
if image_different:
self.diff['image_different'] = True
self.log("differences")
self.log(differences.get_legacy_docker_container_diffs(), pretty_print=True)
image_to_use = self.parameters.image
if not image_to_use and container and container.Image:
image_to_use = container.Image
if not image_to_use:
self.fail('Cannot recreate container when image is not specified or cannot be extracted from current container!')
if container.running:
self.container_stop(container.Id)
self.container_remove(container.Id)
if not self.check_mode:
self.wait_for_state(
container.Id, wait_states=['removing'], accept_removal=True, max_wait=self.parameters.removal_wait_timeout)
new_container = self.container_create(image_to_use, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
if container and container.exists:
container = self.update_limits(container)
container = self.update_networks(container, container_created)
if state == 'started' and not container.running:
self.diff_tracker.add('running', parameter=True, active=was_running)
container = self.container_start(container.Id)
elif state == 'started' and self.parameters.restart:
self.diff_tracker.add('running', parameter=True, active=was_running)
self.diff_tracker.add('restarted', parameter=True, active=False)
container = self.container_restart(container.Id)
elif state == 'stopped' and container.running:
self.diff_tracker.add('running', parameter=False, active=was_running)
self.container_stop(container.Id)
container = self._get_container(container.Id)
if state == 'started' and self.parameters.paused is not None and container.paused != self.parameters.paused:
self.diff_tracker.add('paused', parameter=self.parameters.paused, active=was_paused)
if not self.check_mode:
try:
if self.parameters.paused:
self.client.pause(container=container.Id)
else:
self.client.unpause(container=container.Id)
except Exception as exc:
self.fail("Error %s container %s: %s" % (
"pausing" if self.parameters.paused else "unpausing", container.Id, str(exc)
))
container = self._get_container(container.Id)
self.results['changed'] = True
self.results['actions'].append(dict(set_paused=self.parameters.paused))
self.facts = container.raw
def absent(self):
container = self._get_container(self.parameters.name)
if container.exists:
if container.running:
self.diff_tracker.add('running', parameter=False, active=True)
self.container_stop(container.Id)
self.diff_tracker.add('exists', parameter=False, active=True)
self.container_remove(container.Id)
def fail(self, msg, **kwargs):
self.client.fail(msg, **kwargs)
def _output_logs(self, msg):
self.client.module.log(msg=msg)
def _get_container(self, container):
'''
Expects container ID or Name. Returns a container object
'''
return Container(self.client.get_container(container), self.parameters)
def _get_image(self):
if not self.parameters.image:
self.log('No image specified')
return None
if is_image_name_id(self.parameters.image):
image = self.client.find_image_by_id(self.parameters.image)
else:
repository, tag = utils.parse_repository_tag(self.parameters.image)
if not tag:
tag = "latest"
image = self.client.find_image(repository, tag)
if not image or self.parameters.pull:
if not self.check_mode:
self.log("Pull the image.")
image, alreadyToLatest = self.client.pull_image(repository, tag)
if alreadyToLatest:
self.results['changed'] = False
else:
self.results['changed'] = True
self.results['actions'].append(dict(pulled_image="%s:%s" % (repository, tag)))
elif not image:
# If the image isn't there, claim we'll pull.
# (Implicitly: if the image is there, claim it already was latest.)
self.results['changed'] = True
self.results['actions'].append(dict(pulled_image="%s:%s" % (repository, tag)))
self.log("image")
self.log(image, pretty_print=True)
return image
def _image_is_different(self, image, container):
if image and image.get('Id'):
if container and container.Image:
if image.get('Id') != container.Image:
self.diff_tracker.add('image', parameter=image.get('Id'), active=container.Image)
return True
return False
def update_limits(self, container):
limits_differ, different_limits = container.has_different_resource_limits()
if limits_differ:
self.log("limit differences:")
self.log(different_limits.get_legacy_docker_container_diffs(), pretty_print=True)
self.diff_tracker.merge(different_limits)
if limits_differ and not self.check_mode:
self.container_update(container.Id, self.parameters.update_parameters)
return self._get_container(container.Id)
return container
def update_networks(self, container, container_created):
updated_container = container
if self.parameters.comparisons['networks']['comparison'] != 'ignore' or container_created:
has_network_differences, network_differences = container.has_network_differences()
if has_network_differences:
if self.diff.get('differences'):
self.diff['differences'].append(dict(network_differences=network_differences))
else:
self.diff['differences'] = [dict(network_differences=network_differences)]
for netdiff in network_differences:
self.diff_tracker.add(
'network.{0}'.format(netdiff['parameter']['name']),
parameter=netdiff['parameter'],
active=netdiff['container']
)
self.results['changed'] = True
updated_container = self._add_networks(container, network_differences)
if (self.parameters.comparisons['networks']['comparison'] == 'strict' and self.parameters.networks is not None) or self.parameters.purge_networks:
has_extra_networks, extra_networks = container.has_extra_networks()
if has_extra_networks:
if self.diff.get('differences'):
self.diff['differences'].append(dict(purge_networks=extra_networks))
else:
self.diff['differences'] = [dict(purge_networks=extra_networks)]
for extra_network in extra_networks:
self.diff_tracker.add(
'network.{0}'.format(extra_network['name']),
active=extra_network
)
self.results['changed'] = True
updated_container = self._purge_networks(container, extra_networks)
return updated_container
def _add_networks(self, container, differences):
for diff in differences:
# remove the container from the network, if connected
if diff.get('container'):
self.results['actions'].append(dict(removed_from_network=diff['parameter']['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, diff['parameter']['id'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (diff['parameter']['name'],
str(exc)))
# connect to the network
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if diff['parameter'].get(para):
params[para] = diff['parameter'][para]
self.results['actions'].append(dict(added_to_network=diff['parameter']['name'], network_parameters=params))
if not self.check_mode:
try:
self.log("Connecting container to network %s" % diff['parameter']['id'])
self.log(params, pretty_print=True)
self.client.connect_container_to_network(container.Id, diff['parameter']['id'], **params)
except Exception as exc:
self.fail("Error connecting container to network %s - %s" % (diff['parameter']['name'], str(exc)))
return self._get_container(container.Id)
def _purge_networks(self, container, networks):
for network in networks:
self.results['actions'].append(dict(removed_from_network=network['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, network['name'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (network['name'],
str(exc)))
return self._get_container(container.Id)
def container_create(self, image, create_parameters):
self.log("create container")
self.log("image: %s parameters:" % image)
self.log(create_parameters, pretty_print=True)
self.results['actions'].append(dict(created="Created container", create_parameters=create_parameters))
self.results['changed'] = True
new_container = None
if not self.check_mode:
try:
new_container = self.client.create_container(image, **create_parameters)
self.client.report_warnings(new_container)
except Exception as exc:
self.fail("Error creating container: %s" % str(exc))
return self._get_container(new_container['Id'])
return new_container
def container_start(self, container_id):
self.log("start container %s" % (container_id))
self.results['actions'].append(dict(started=container_id))
self.results['changed'] = True
if not self.check_mode:
try:
self.client.start(container=container_id)
except Exception as exc:
self.fail("Error starting container %s: %s" % (container_id, str(exc)))
if self.parameters.detach is False:
if self.client.docker_py_version >= LooseVersion('3.0'):
status = self.client.wait(container_id)['StatusCode']
else:
status = self.client.wait(container_id)
if self.parameters.auto_remove:
output = "Cannot retrieve result as auto_remove is enabled"
if self.parameters.output_logs:
self.client.module.warn('Cannot output_logs if auto_remove is enabled!')
else:
config = self.client.inspect_container(container_id)
logging_driver = config['HostConfig']['LogConfig']['Type']
if logging_driver in ('json-file', 'journald'):
output = self.client.logs(container_id, stdout=True, stderr=True, stream=False, timestamps=False)
if self.parameters.output_logs:
self._output_logs(msg=output)
else:
output = "Result logged using `%s` driver" % logging_driver
if status != 0:
self.fail(output, status=status)
if self.parameters.cleanup:
self.container_remove(container_id, force=True)
insp = self._get_container(container_id)
if insp.raw:
insp.raw['Output'] = output
else:
insp.raw = dict(Output=output)
return insp
return self._get_container(container_id)
def container_remove(self, container_id, link=False, force=False):
volume_state = (not self.parameters.keep_volumes)
self.log("remove container container:%s v:%s link:%s force:%s" % (container_id, volume_state, link, force))
self.results['actions'].append(dict(removed=container_id, volume_state=volume_state, link=link, force=force))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
response = self.client.remove_container(container_id, v=volume_state, link=link, force=force)
except NotFound as dummy:
pass
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be removed
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error removing container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
if 'removal of container ' in exc.explanation and ' is already in progress' in exc.explanation:
pass
else:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def container_update(self, container_id, update_parameters):
if update_parameters:
self.log("update container %s" % (container_id))
self.log(update_parameters, pretty_print=True)
self.results['actions'].append(dict(updated=container_id, update_parameters=update_parameters))
self.results['changed'] = True
if not self.check_mode and callable(getattr(self.client, 'update_container')):
try:
result = self.client.update_container(container_id, **update_parameters)
self.client.report_warnings(result)
except Exception as exc:
self.fail("Error updating container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_kill(self, container_id):
self.results['actions'].append(dict(killed=container_id, signal=self.parameters.kill_signal))
self.results['changed'] = True
response = None
if not self.check_mode:
try:
if self.parameters.kill_signal:
response = self.client.kill(container_id, signal=self.parameters.kill_signal)
else:
response = self.client.kill(container_id)
except Exception as exc:
self.fail("Error killing container %s: %s" % (container_id, exc))
return response
def container_restart(self, container_id):
self.results['actions'].append(dict(restarted=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
if not self.check_mode:
try:
if self.parameters.stop_timeout:
dummy = self.client.restart(container_id, timeout=self.parameters.stop_timeout)
else:
dummy = self.client.restart(container_id)
except Exception as exc:
self.fail("Error restarting container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_stop(self, container_id):
if self.parameters.force_kill:
self.container_kill(container_id)
return
self.results['actions'].append(dict(stopped=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
if self.parameters.stop_timeout:
response = self.client.stop(container_id, timeout=self.parameters.stop_timeout)
else:
response = self.client.stop(container_id)
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be removed
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error stopping container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def detect_ipvX_address_usage(client):
'''
Helper function to detect whether any specified network uses ipv4_address or ipv6_address
'''
for network in client.module.params.get("networks") or []:
if network.get('ipv4_address') is not None or network.get('ipv6_address') is not None:
return True
return False
class AnsibleDockerClientContainer(AnsibleDockerClient):
# A list of module options which are not docker container properties
__NON_CONTAINER_PROPERTY_OPTIONS = tuple([
'env_file', 'force_kill', 'keep_volumes', 'ignore_image', 'name', 'pull', 'purge_networks',
'recreate', 'restart', 'state', 'trust_image_content', 'networks', 'cleanup', 'kill_signal',
'output_logs', 'paused', 'removal_wait_timeout'
] + list(DOCKER_COMMON_ARGS.keys()))
def _parse_comparisons(self):
comparisons = {}
comp_aliases = {}
# Put in defaults
explicit_types = dict(
command='list',
devices='set(dict)',
dns_search_domains='list',
dns_servers='list',
env='set',
entrypoint='list',
etc_hosts='set',
mounts='set(dict)',
networks='set(dict)',
ulimits='set(dict)',
device_read_bps='set(dict)',
device_write_bps='set(dict)',
device_read_iops='set(dict)',
device_write_iops='set(dict)',
)
all_options = set() # this is for improving user feedback when a wrong option was specified for comparison
default_values = dict(
stop_timeout='ignore',
)
for option, data in self.module.argument_spec.items():
all_options.add(option)
for alias in data.get('aliases', []):
all_options.add(alias)
# Ignore options which aren't used as container properties
if option in self.__NON_CONTAINER_PROPERTY_OPTIONS and option != 'networks':
continue
# Determine option type
if option in explicit_types:
datatype = explicit_types[option]
elif data['type'] == 'list':
datatype = 'set'
elif data['type'] == 'dict':
datatype = 'dict'
else:
datatype = 'value'
# Determine comparison type
if option in default_values:
comparison = default_values[option]
elif datatype in ('list', 'value'):
comparison = 'strict'
else:
comparison = 'allow_more_present'
comparisons[option] = dict(type=datatype, comparison=comparison, name=option)
# Keep track of aliases
comp_aliases[option] = option
for alias in data.get('aliases', []):
comp_aliases[alias] = option
# Process legacy ignore options
if self.module.params['ignore_image']:
comparisons['image']['comparison'] = 'ignore'
if self.module.params['purge_networks']:
comparisons['networks']['comparison'] = 'strict'
# Process options
if self.module.params.get('comparisons'):
# If '*' appears in comparisons, process it first
if '*' in self.module.params['comparisons']:
value = self.module.params['comparisons']['*']
if value not in ('strict', 'ignore'):
self.fail("The wildcard can only be used with comparison modes 'strict' and 'ignore'!")
for option, v in comparisons.items():
if option == 'networks':
# `networks` is special: only update if
# some value is actually specified
if self.module.params['networks'] is None:
continue
v['comparison'] = value
# Now process all other comparisons.
comp_aliases_used = {}
for key, value in self.module.params['comparisons'].items():
if key == '*':
continue
# Find main key
key_main = comp_aliases.get(key)
if key_main is None:
if key in all_options:
self.fail("The module option '%s' cannot be specified in the comparisons dict, "
"since it does not correspond to container's state!" % key)
self.fail("Unknown module option '%s' in comparisons dict!" % key)
if key_main in comp_aliases_used:
self.fail("Both '%s' and '%s' (aliases of %s) are specified in comparisons dict!" % (key, comp_aliases_used[key_main], key_main))
comp_aliases_used[key_main] = key
# Check value and update accordingly
if value in ('strict', 'ignore'):
comparisons[key_main]['comparison'] = value
elif value == 'allow_more_present':
if comparisons[key_main]['type'] == 'value':
self.fail("Option '%s' is a value and not a set/list/dict, so its comparison cannot be %s" % (key, value))
comparisons[key_main]['comparison'] = value
else:
self.fail("Unknown comparison mode '%s'!" % value)
# Add implicit options
comparisons['publish_all_ports'] = dict(type='value', comparison='strict', name='published_ports')
comparisons['expected_ports'] = dict(type='dict', comparison=comparisons['published_ports']['comparison'], name='expected_ports')
comparisons['disable_healthcheck'] = dict(type='value',
comparison='ignore' if comparisons['healthcheck']['comparison'] == 'ignore' else 'strict',
name='disable_healthcheck')
# Check legacy values
if self.module.params['ignore_image'] and comparisons['image']['comparison'] != 'ignore':
self.module.warn('The ignore_image option has been overridden by the comparisons option!')
if self.module.params['purge_networks'] and comparisons['networks']['comparison'] != 'strict':
self.module.warn('The purge_networks option has been overridden by the comparisons option!')
self.comparisons = comparisons
def _get_additional_minimal_versions(self):
stop_timeout_supported = self.docker_api_version >= LooseVersion('1.25')
stop_timeout_needed_for_update = self.module.params.get("stop_timeout") is not None and self.module.params.get('state') != 'absent'
if stop_timeout_supported:
stop_timeout_supported = self.docker_py_version >= LooseVersion('2.1')
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker SDK for Python's version is %s. Minimum version required is 2.1 to update "
"the container's stop_timeout configuration. "
"If you use the 'docker-py' module, you have to switch to the 'docker' Python package." % (docker_version,))
else:
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker API version is %s. Minimum version required is 1.25 to set or "
"update the container's stop_timeout configuration." % (self.docker_api_version_str,))
self.option_minimal_versions['stop_timeout']['supported'] = stop_timeout_supported
def __init__(self, **kwargs):
option_minimal_versions = dict(
# internal options
log_config=dict(),
publish_all_ports=dict(),
ports=dict(),
volume_binds=dict(),
name=dict(),
# normal options
device_read_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_read_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
dns_opts=dict(docker_api_version='1.21', docker_py_version='1.10.0'),
ipc_mode=dict(docker_api_version='1.25'),
mac_address=dict(docker_api_version='1.25'),
oom_score_adj=dict(docker_api_version='1.22'),
shm_size=dict(docker_api_version='1.22'),
stop_signal=dict(docker_api_version='1.21'),
tmpfs=dict(docker_api_version='1.22'),
volume_driver=dict(docker_api_version='1.21'),
memory_reservation=dict(docker_api_version='1.21'),
kernel_memory=dict(docker_api_version='1.21'),
auto_remove=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
healthcheck=dict(docker_py_version='2.0.0', docker_api_version='1.24'),
init=dict(docker_py_version='2.2.0', docker_api_version='1.25'),
runtime=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
sysctls=dict(docker_py_version='1.10.0', docker_api_version='1.24'),
userns_mode=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
uts=dict(docker_py_version='3.5.0', docker_api_version='1.25'),
pids_limit=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
mounts=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
cpus=dict(docker_py_version='2.3.0', docker_api_version='1.25'),
# specials
ipvX_address_supported=dict(docker_py_version='1.9.0', docker_api_version='1.22',
detect_usage=detect_ipvX_address_usage,
usage_msg='ipv4_address or ipv6_address in networks'),
stop_timeout=dict(), # see _get_additional_minimal_versions()
)
super(AnsibleDockerClientContainer, self).__init__(
option_minimal_versions=option_minimal_versions,
option_minimal_versions_ignore_params=self.__NON_CONTAINER_PROPERTY_OPTIONS,
**kwargs
)
self.image_inspect_source = 'Config'
if self.docker_api_version < LooseVersion('1.21'):
self.image_inspect_source = 'ContainerConfig'
self._get_additional_minimal_versions()
self._parse_comparisons()
if self.module.params['container_default_behavior'] is None:
self.module.params['container_default_behavior'] = 'compatibility'
self.module.deprecate(
'The container_default_behavior option will change its default value from "compatibility" to '
'"no_defaults" in Ansible 2.14. To remove this warning, please specify an explicit value for it now',
version='2.14'
)
if self.module.params['container_default_behavior'] == 'compatibility':
old_default_values = dict(
auto_remove=False,
detach=True,
init=False,
interactive=False,
memory="0",
paused=False,
privileged=False,
read_only=False,
tty=False,
)
for param, value in old_default_values.items():
if self.module.params[param] is None:
self.module.params[param] = value
def main():
argument_spec = dict(
auto_remove=dict(type='bool'),
blkio_weight=dict(type='int'),
capabilities=dict(type='list', elements='str'),
cap_drop=dict(type='list', elements='str'),
cleanup=dict(type='bool', default=False),
command=dict(type='raw'),
comparisons=dict(type='dict'),
container_default_behavior=dict(type='str', choices=['compatibility', 'no_defaults']),
cpu_period=dict(type='int'),
cpu_quota=dict(type='int'),
cpus=dict(type='float'),
cpuset_cpus=dict(type='str'),
cpuset_mems=dict(type='str'),
cpu_shares=dict(type='int'),
detach=dict(type='bool'),
devices=dict(type='list', elements='str'),
device_read_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_write_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_read_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
device_write_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
dns_servers=dict(type='list', elements='str'),
dns_opts=dict(type='list', elements='str'),
dns_search_domains=dict(type='list', elements='str'),
domainname=dict(type='str'),
entrypoint=dict(type='list', elements='str'),
env=dict(type='dict'),
env_file=dict(type='path'),
etc_hosts=dict(type='dict'),
exposed_ports=dict(type='list', elements='str', aliases=['exposed', 'expose']),
force_kill=dict(type='bool', default=False, aliases=['forcekill']),
groups=dict(type='list', elements='str'),
healthcheck=dict(type='dict', options=dict(
test=dict(type='raw'),
interval=dict(type='str'),
timeout=dict(type='str'),
start_period=dict(type='str'),
retries=dict(type='int'),
)),
hostname=dict(type='str'),
ignore_image=dict(type='bool', default=False),
image=dict(type='str'),
init=dict(type='bool'),
interactive=dict(type='bool'),
ipc_mode=dict(type='str'),
keep_volumes=dict(type='bool', default=True),
kernel_memory=dict(type='str'),
kill_signal=dict(type='str'),
labels=dict(type='dict'),
links=dict(type='list', elements='str'),
log_driver=dict(type='str'),
log_options=dict(type='dict', aliases=['log_opt']),
mac_address=dict(type='str'),
memory=dict(type='str'),
memory_reservation=dict(type='str'),
memory_swap=dict(type='str'),
memory_swappiness=dict(type='int'),
mounts=dict(type='list', elements='dict', options=dict(
target=dict(type='str', required=True),
source=dict(type='str'),
type=dict(type='str', choices=['bind', 'volume', 'tmpfs', 'npipe'], default='volume'),
read_only=dict(type='bool'),
consistency=dict(type='str', choices=['default', 'consistent', 'cached', 'delegated']),
propagation=dict(type='str', choices=['private', 'rprivate', 'shared', 'rshared', 'slave', 'rslave']),
no_copy=dict(type='bool'),
labels=dict(type='dict'),
volume_driver=dict(type='str'),
volume_options=dict(type='dict'),
tmpfs_size=dict(type='str'),
tmpfs_mode=dict(type='str'),
)),
name=dict(type='str', required=True),
network_mode=dict(type='str'),
networks=dict(type='list', elements='dict', options=dict(
name=dict(type='str', required=True),
ipv4_address=dict(type='str'),
ipv6_address=dict(type='str'),
aliases=dict(type='list', elements='str'),
links=dict(type='list', elements='str'),
)),
networks_cli_compatible=dict(type='bool'),
oom_killer=dict(type='bool'),
oom_score_adj=dict(type='int'),
output_logs=dict(type='bool', default=False),
paused=dict(type='bool'),
pid_mode=dict(type='str'),
pids_limit=dict(type='int'),
privileged=dict(type='bool'),
published_ports=dict(type='list', elements='str', aliases=['ports']),
pull=dict(type='bool', default=False),
purge_networks=dict(type='bool', default=False),
read_only=dict(type='bool'),
recreate=dict(type='bool', default=False),
removal_wait_timeout=dict(type='float'),
restart=dict(type='bool', default=False),
restart_policy=dict(type='str', choices=['no', 'on-failure', 'always', 'unless-stopped']),
restart_retries=dict(type='int'),
runtime=dict(type='str'),
security_opts=dict(type='list', elements='str'),
shm_size=dict(type='str'),
state=dict(type='str', default='started', choices=['absent', 'present', 'started', 'stopped']),
stop_signal=dict(type='str'),
stop_timeout=dict(type='int'),
sysctls=dict(type='dict'),
tmpfs=dict(type='list', elements='str'),
trust_image_content=dict(type='bool', default=False, removed_in_version='2.14'),
tty=dict(type='bool'),
ulimits=dict(type='list', elements='str'),
user=dict(type='str'),
userns_mode=dict(type='str'),
uts=dict(type='str'),
volume_driver=dict(type='str'),
volumes=dict(type='list', elements='str'),
volumes_from=dict(type='list', elements='str'),
working_dir=dict(type='str'),
)
required_if = [
('state', 'present', ['image'])
]
client = AnsibleDockerClientContainer(
argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True,
min_docker_api_version='1.20',
)
if client.module.params['networks_cli_compatible'] is None and client.module.params['networks']:
client.module.deprecate(
'Please note that docker_container handles networks slightly different than docker CLI. '
'If you specify networks, the default network will still be attached as the first network. '
'(You can specify purge_networks to remove all networks not explicitly listed.) '
'This behavior will change in Ansible 2.12. You can change the behavior now by setting '
'the new `networks_cli_compatible` option to `yes`, and remove this warning by setting '
'it to `no`',
version='2.12'
)
if client.module.params['networks_cli_compatible'] is True and client.module.params['networks'] and client.module.params['network_mode'] is None:
client.module.deprecate(
'Please note that the default value for `network_mode` will change from not specified '
'(which is equal to `default`) to the name of the first network in `networks` if '
'`networks` has at least one entry and `networks_cli_compatible` is `true`. You can '
'change the behavior now by explicitly setting `network_mode` to the name of the first '
'network in `networks`, and remove this warning by setting `network_mode` to `default`. '
'Please make sure that the value you set to `network_mode` equals the inspection result '
'for existing containers, otherwise the module will recreate them. You can find out the '
'correct value by running "docker inspect --format \'{{.HostConfig.NetworkMode}}\' <container_name>"',
version='2.14'
)
try:
cm = ContainerManager(client)
client.module.exit_json(**sanitize_result(cm.results))
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,213 |
docker_container does not override image defined healthcheck when test: ["NONE"]
|
As per the docs, the following should override the image-defined health check, but it fails to do so. This matters where the image-defined health check is not applicable. One such example is creating a MySQL cluster: the image's default health check targets mysqld, which is not appropriate for containers where ndbd or ndb_mgmd is the relevant process.
This leaves those containers marked as unhealthy.
Using:
healthcheck:
test: ["NONE"]
If you use the above to launch MySQL, the image-defined health check is still used; the inspect output follows:
"Healthcheck": {
"Test": [
"CMD-SHELL",
"/healthcheck.sh"
]
},
Resulting in:
21b30279e77d mysql/mysql-cluster:8.0.18 "/entrypoint.sh mysq…" 3 minutes ago Up 2 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql
0fb50981bba5 mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndbd" 3 minutes ago Up 3 minutes (unhealthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-ndbd-2
8746135e3f8e mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndbd" 3 minutes ago Up 3 minutes (unhealthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-ndbd-1
420fb8249df8 mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndb_…" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-mgmd
The workaround is to call a stub shell script which simply calls exit 0.
healthcheck:
test: ["CMD-SHELL", "{{ mysql_config_directory }}/healthcheck.sh"]
Resulting in:
"Healthcheck": {
"Test": [
"CMD-SHELL",
"/etc/cell/dev/mysql/healthcheck.sh"
]
},
And:
aaf28f87abf0 mysql/mysql-cluster:8.0.18 "/entrypoint.sh mysq…" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql
2df7948c37fc mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndbd" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-ndbd-2
1a3cd97cfc80 mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndbd" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-ndbd-1
5fea532ef20f mysql/mysql-cluster:8.0.18 "/entrypoint.sh ndb_…" 3 minutes ago Up 3 minutes (healthy) 1186/tcp, 2202/tcp, 3306/tcp, 33060/tcp dev-mysql-mgmd
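On the Docker API side, a health config of `{"Test": ["NONE"]}` is what tells the daemon to disable a healthcheck inherited from the image, so the module needs to pass `test: ["NONE"]` through unchanged rather than dropping it. A minimal sketch of that translation (the helper name is illustrative, not the module's actual code):

```python
def normalize_healthcheck(healthcheck):
    """Translate a module-style healthcheck dict into a Docker API
    HealthConfig payload (illustrative helper, not the module's real code)."""
    test = healthcheck.get('test')
    if isinstance(test, str):
        # A plain string is run through the shell, as in the Dockerfile form.
        test = ['CMD-SHELL', test]
    if test == ['NONE']:
        # ["NONE"] must reach the daemon unchanged so it disables the
        # healthcheck inherited from the image.
        return {'Test': ['NONE']}
    result = {'Test': test}
    # The API expects durations in nanoseconds; values are assumed to be
    # converted already. Only options that were set are forwarded.
    for key, api_key in (('interval', 'Interval'),
                         ('timeout', 'Timeout'),
                         ('start_period', 'StartPeriod'),
                         ('retries', 'Retries')):
        if healthcheck.get(key) is not None:
            result[api_key] = healthcheck[key]
    return result
```

With this shape, `test: ["NONE"]` produces exactly the disabling payload instead of leaving the image's `/healthcheck.sh` in place.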
|
https://github.com/ansible/ansible/issues/66213
|
https://github.com/ansible/ansible/pull/66599
|
d6f2b4e788ed13756ba4e4a05b8b7a879900dbc3
|
5c1a3a3ac2086119bd16316dde379047d90cd86c
| 2020-01-06T15:46:24Z |
python
| 2020-02-03T18:13:17Z |
lib/ansible/modules/cloud/docker/docker_swarm_service.py
|
#!/usr/bin/python
#
# (c) 2017, Dario Zanzico ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'}
DOCUMENTATION = '''
---
module: docker_swarm_service
author:
- "Dario Zanzico (@dariko)"
- "Jason Witkowski (@jwitko)"
- "Hannes Ljungberg (@hannseman)"
short_description: docker swarm service
description:
- Manages docker services via a swarm manager node.
version_added: "2.7"
options:
args:
description:
- List of arguments to be passed to the container.
- Corresponds to the C(ARG) parameter of C(docker service create).
type: list
elements: str
command:
description:
- Command to execute when the container starts.
- A command may be either a string, a list, or a list of strings.
- Corresponds to the C(COMMAND) parameter of C(docker service create).
type: raw
version_added: 2.8
configs:
description:
- List of dictionaries describing the service configs.
- Corresponds to the C(--config) option of C(docker service create).
- Requires API version >= 1.30.
type: list
elements: dict
suboptions:
config_id:
description:
- Config's ID.
type: str
config_name:
description:
- Config's name as defined at its creation.
type: str
required: yes
filename:
description:
- Name of the file containing the config. Defaults to the I(config_name) if not specified.
type: str
uid:
description:
- UID of the config file's owner.
type: str
gid:
description:
- GID of the config file's group.
type: str
mode:
description:
- File access mode inside the container. Must be an octal number (like C(0644) or C(0444)).
type: int
constraints:
description:
- List of the service constraints.
- Corresponds to the C(--constraint) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(placement.constraints) instead.
type: list
elements: str
container_labels:
description:
- Dictionary of key value pairs.
- Corresponds to the C(--container-label) option of C(docker service create).
type: dict
dns:
description:
- List of custom DNS servers.
- Corresponds to the C(--dns) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: str
dns_search:
description:
- List of custom DNS search domains.
- Corresponds to the C(--dns-search) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: str
dns_options:
description:
- List of custom DNS options.
- Corresponds to the C(--dns-option) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: str
endpoint_mode:
description:
- Service endpoint mode.
- Corresponds to the C(--endpoint-mode) option of C(docker service create).
- Requires API version >= 1.25.
type: str
choices:
- vip
- dnsrr
env:
description:
- List or dictionary of the service environment variables.
- If passed a list, each item needs to be in the format of C(KEY=VALUE).
- If passed a dictionary, values which might be parsed as numbers,
booleans or other types by the YAML parser must be quoted (e.g. C("true"))
in order to avoid data loss.
- Corresponds to the C(--env) option of C(docker service create).
type: raw
env_files:
description:
- List of paths to files, present on the target, containing environment variables C(FOO=BAR).
- The order of the list is significant in determining the value assigned to a
variable that shows up more than once.
- If a variable is also present in I(env), the I(env) value will override it.
type: list
elements: path
version_added: "2.8"
force_update:
description:
- Force update even if no changes require it.
- Corresponds to the C(--force) option of C(docker service update).
- Requires API version >= 1.25.
type: bool
default: no
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
- Corresponds to the C(--group) option of C(docker service update).
- Requires API version >= 1.25.
type: list
elements: str
version_added: "2.8"
healthcheck:
description:
- Configure a check that is run to determine whether or not containers for this service are "healthy".
See the docs for the L(HEALTHCHECK Dockerfile instruction,https://docs.docker.com/engine/reference/builder/#healthcheck)
for details on how healthchecks work.
- "I(interval), I(timeout) and I(start_period) are specified as durations. They accept duration as a string in a format
that look like: C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Requires API version >= 1.25.
type: dict
suboptions:
test:
description:
- Command to run to check health.
- Must be either a string or a list. If it is a list, the first item must be one of C(NONE), C(CMD) or C(CMD-SHELL).
type: raw
interval:
description:
- Time between running the check.
type: str
timeout:
description:
- Maximum time to allow one check to run.
type: str
retries:
description:
- Consecutive failures needed to report unhealthy. Accepts an integer value.
type: int
start_period:
description:
- Start period for the container to initialize before starting health-retries countdown.
type: str
version_added: "2.8"
hostname:
description:
- Container hostname.
- Corresponds to the C(--hostname) option of C(docker service create).
- Requires API version >= 1.25.
type: str
hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary.
Each host name will be added to the container's /etc/hosts file.
- Corresponds to the C(--host) option of C(docker service create).
- Requires API version >= 1.25.
type: dict
version_added: "2.8"
image:
description:
- Service image path and tag.
- Corresponds to the C(IMAGE) parameter of C(docker service create).
type: str
labels:
description:
- Dictionary of key value pairs.
- Corresponds to the C(--label) option of C(docker service create).
type: dict
limits:
description:
- Configures service resource limits.
suboptions:
cpus:
description:
- Service CPU limit. C(0) equals no limit.
- Corresponds to the C(--limit-cpu) option of C(docker service create).
type: float
memory:
description:
- "Service memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- C(0) equals no limit.
- Omitting the unit defaults to bytes.
- Corresponds to the C(--limit-memory) option of C(docker service create).
type: str
type: dict
version_added: "2.8"
limit_cpu:
description:
- Service CPU limit. C(0) equals no limit.
- Corresponds to the C(--limit-cpu) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(limits.cpus) instead.
type: float
limit_memory:
description:
- "Service memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- C(0) equals no limit.
- Omitting the unit defaults to bytes.
- Corresponds to the C(--limit-memory) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(limits.memory) instead.
type: str
logging:
description:
- "Logging configuration for the service."
suboptions:
driver:
description:
- Configure the logging driver for a service.
- Corresponds to the C(--log-driver) option of C(docker service create).
type: str
options:
description:
- Options for service logging driver.
- Corresponds to the C(--log-opt) option of C(docker service create).
type: dict
type: dict
version_added: "2.8"
log_driver:
description:
- Configure the logging driver for a service.
- Corresponds to the C(--log-driver) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(logging.driver) instead.
type: str
log_driver_options:
description:
- Options for service logging driver.
- Corresponds to the C(--log-opt) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(logging.options) instead.
type: dict
mode:
description:
- Service replication mode.
- Service will be removed and recreated when changed.
- Corresponds to the C(--mode) option of C(docker service create).
type: str
default: replicated
choices:
- replicated
- global
mounts:
description:
- List of dictionaries describing the service mounts.
- Corresponds to the C(--mount) option of C(docker service create).
type: list
elements: dict
suboptions:
source:
description:
- Mount source (e.g. a volume name or a host path).
- Must be specified if I(type) is not C(tmpfs).
type: str
target:
description:
- Container path.
type: str
required: yes
type:
description:
- The mount type.
- Note that C(npipe) is only supported by Docker for Windows. Also note that C(npipe) was added in Ansible 2.9.
type: str
default: bind
choices:
- bind
- volume
- tmpfs
- npipe
readonly:
description:
- Whether the mount should be read-only.
type: bool
labels:
description:
- Volume labels to apply.
type: dict
version_added: "2.8"
propagation:
description:
- The propagation mode to use.
- Can only be used when I(mode) is C(bind).
type: str
choices:
- shared
- slave
- private
- rshared
- rslave
- rprivate
version_added: "2.8"
no_copy:
description:
- Disable copying of data from a container when a volume is created.
- Can only be used when I(mode) is C(volume).
type: bool
version_added: "2.8"
driver_config:
description:
- Volume driver configuration.
- Can only be used when I(mode) is C(volume).
suboptions:
name:
description:
- Name of the volume-driver plugin to use for the volume.
type: str
options:
description:
- Options as key-value pairs to pass to the driver for this volume.
type: dict
type: dict
version_added: "2.8"
tmpfs_size:
description:
- "Size of the tmpfs mount in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Can only be used when I(mode) is C(tmpfs).
type: str
version_added: "2.8"
tmpfs_mode:
description:
- File mode of the tmpfs in octal.
- Can only be used when I(mode) is C(tmpfs).
type: int
version_added: "2.8"
name:
description:
- Service name.
- Corresponds to the C(--name) option of C(docker service create).
type: str
required: yes
networks:
description:
- List of the service network names or dictionaries.
- When passing dictionaries, valid sub-options are I(name), which is required, plus
I(aliases) and I(options).
- Prior to API version 1.29, updating and removing networks is not supported.
If changes are made the service will then be removed and recreated.
- Corresponds to the C(--network) option of C(docker service create).
type: list
elements: raw
placement:
description:
- Configures service placement preferences and constraints.
suboptions:
constraints:
description:
- List of the service constraints.
- Corresponds to the C(--constraint) option of C(docker service create).
type: list
elements: str
preferences:
description:
- List of the placement preferences as key value pairs.
- Corresponds to the C(--placement-pref) option of C(docker service create).
- Requires API version >= 1.27.
type: list
elements: dict
type: dict
version_added: "2.8"
publish:
description:
- List of dictionaries describing the service published ports.
- Corresponds to the C(--publish) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: dict
suboptions:
published_port:
description:
- The port to make externally available.
type: int
required: yes
target_port:
description:
- The port inside the container to expose.
type: int
required: yes
protocol:
description:
- What protocol to use.
type: str
default: tcp
choices:
- tcp
- udp
mode:
description:
- What publish mode to use.
- Requires API version >= 1.32.
type: str
choices:
- ingress
- host
read_only:
description:
- Mount the containers root filesystem as read only.
- Corresponds to the C(--read-only) option of C(docker service create).
type: bool
version_added: "2.8"
replicas:
description:
- Number of containers instantiated in the service. Valid only if I(mode) is C(replicated).
- If set to C(-1), and service is not present, service replicas will be set to C(1).
- If set to C(-1), and service is present, service replicas will be unchanged.
- Corresponds to the C(--replicas) option of C(docker service create).
type: int
default: -1
reservations:
description:
- Configures service resource reservations.
suboptions:
cpus:
description:
- Service CPU reservation. C(0) equals no reservation.
- Corresponds to the C(--reserve-cpu) option of C(docker service create).
type: float
memory:
description:
- "Service memory reservation in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- C(0) equals no reservation.
- Omitting the unit defaults to bytes.
- Corresponds to the C(--reserve-memory) option of C(docker service create).
type: str
type: dict
version_added: "2.8"
reserve_cpu:
description:
- Service CPU reservation. C(0) equals no reservation.
- Corresponds to the C(--reserve-cpu) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(reservations.cpus) instead.
type: float
reserve_memory:
description:
- "Service memory reservation in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- C(0) equals no reservation.
- Omitting the unit defaults to bytes.
- Corresponds to the C(--reserve-memory) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(reservations.memory) instead.
type: str
resolve_image:
description:
- If the current image digest should be resolved from registry and updated if changed.
- Requires API version >= 1.30.
type: bool
default: no
version_added: 2.8
restart_config:
description:
- Configures if and how to restart containers when they exit.
suboptions:
condition:
description:
- Restart condition of the service.
- Corresponds to the C(--restart-condition) option of C(docker service create).
type: str
choices:
- none
- on-failure
- any
delay:
description:
- Delay between restarts.
- "Accepts a a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--restart-delay) option of C(docker service create).
type: str
max_attempts:
description:
- Maximum number of service restarts.
- Corresponds to the C(--restart-max-attempts) option of C(docker service create).
type: int
window:
description:
- Restart policy evaluation window.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--restart-window) option of C(docker service create).
type: str
type: dict
version_added: "2.8"
restart_policy:
description:
- Restart condition of the service.
- Corresponds to the C(--restart-condition) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(restart_config.condition) instead.
type: str
choices:
- none
- on-failure
- any
restart_policy_attempts:
description:
- Maximum number of service restarts.
- Corresponds to the C(--restart-max-attempts) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(restart_config.max_attempts) instead.
type: int
restart_policy_delay:
description:
- Delay between restarts.
- "Accepts a duration as an integer in nanoseconds or as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--restart-delay) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(restart_config.delay) instead.
type: raw
restart_policy_window:
description:
- Restart policy evaluation window.
- "Accepts a duration as an integer in nanoseconds or as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--restart-window) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(restart_config.window) instead.
type: raw
rollback_config:
description:
- Configures how the service should be rolled back in case of a failing update.
suboptions:
parallelism:
description:
- The number of containers to roll back at a time. If set to 0, all containers roll back simultaneously.
- Corresponds to the C(--rollback-parallelism) option of C(docker service create).
- Requires API version >= 1.28.
type: int
delay:
description:
- Delay between task rollbacks.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--rollback-delay) option of C(docker service create).
- Requires API version >= 1.28.
type: str
failure_action:
description:
- Action to take in case of rollback failure.
- Corresponds to the C(--rollback-failure-action) option of C(docker service create).
- Requires API version >= 1.28.
type: str
choices:
- continue
- pause
monitor:
description:
- Duration after each task rollback to monitor for failure.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--rollback-monitor) option of C(docker service create).
- Requires API version >= 1.28.
type: str
max_failure_ratio:
description:
- Fraction of tasks that may fail during a rollback.
- Corresponds to the C(--rollback-max-failure-ratio) option of C(docker service create).
- Requires API version >= 1.28.
type: float
order:
description:
- Specifies the order of operations during rollbacks.
- Corresponds to the C(--rollback-order) option of C(docker service create).
- Requires API version >= 1.29.
type: str
type: dict
version_added: "2.8"
secrets:
description:
- List of dictionaries describing the service secrets.
- Corresponds to the C(--secret) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: dict
suboptions:
secret_id:
description:
- Secret's ID.
type: str
secret_name:
description:
- Secret's name as defined at its creation.
type: str
required: yes
filename:
description:
- Name of the file containing the secret. Defaults to the I(secret_name) if not specified.
- Corresponds to the C(target) key of C(docker service create --secret).
type: str
uid:
description:
- UID of the secret file's owner.
type: str
gid:
description:
- GID of the secret file's group.
type: str
mode:
description:
- File access mode inside the container. Must be an octal number (like C(0644) or C(0444)).
type: int
state:
description:
- C(absent) - A service matching the specified name will be removed and have its tasks stopped.
- C(present) - Asserts the existence of a service matching the name and provided configuration parameters.
Unspecified configuration parameters will be set to docker defaults.
type: str
default: present
choices:
- present
- absent
stop_grace_period:
description:
- Time to wait before force killing a container.
- "Accepts a duration as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--stop-grace-period) option of C(docker service create).
type: str
version_added: "2.8"
stop_signal:
description:
- Override default signal used to stop the container.
- Corresponds to the C(--stop-signal) option of C(docker service create).
type: str
version_added: "2.8"
tty:
description:
- Allocate a pseudo-TTY.
- Corresponds to the C(--tty) option of C(docker service create).
- Requires API version >= 1.25.
type: bool
update_config:
description:
- Configures how the service should be updated. Useful for configuring rolling updates.
suboptions:
parallelism:
description:
- Rolling update parallelism.
- Corresponds to the C(--update-parallelism) option of C(docker service create).
type: int
delay:
description:
- Rolling update delay.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--update-delay) option of C(docker service create).
type: str
failure_action:
description:
- Action to take in case of container failure.
- Corresponds to the C(--update-failure-action) option of C(docker service create).
- Usage of I(rollback) requires API version >= 1.29.
type: str
choices:
- continue
- pause
- rollback
monitor:
description:
- Time to monitor updated tasks for failures.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--update-monitor) option of C(docker service create).
- Requires API version >= 1.25.
type: str
max_failure_ratio:
description:
- Fraction of tasks that may fail during an update before the failure action is invoked.
- Corresponds to the C(--update-max-failure-ratio) option of C(docker service create).
- Requires API version >= 1.25.
type: float
order:
description:
- Specifies the order of operations when rolling out an updated task.
- Corresponds to the C(--update-order) option of C(docker service create).
- Requires API version >= 1.29.
type: str
type: dict
version_added: "2.8"
update_delay:
description:
- Rolling update delay.
- "Accepts a duration as an integer in nanoseconds or as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--update-delay) option of C(docker service create).
- Before Ansible 2.8, the default value for this option was C(10).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.delay) instead.
type: raw
update_parallelism:
description:
- Rolling update parallelism.
- Corresponds to the C(--update-parallelism) option of C(docker service create).
- Before Ansible 2.8, the default value for this option was C(1).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.parallelism) instead.
type: int
update_failure_action:
description:
- Action to take in case of container failure.
- Corresponds to the C(--update-failure-action) option of C(docker service create).
- Usage of I(rollback) requires API version >= 1.29.
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.failure_action) instead.
type: str
choices:
- continue
- pause
- rollback
update_monitor:
description:
- Time to monitor updated tasks for failures.
- "Accepts a duration as an integer in nanoseconds or as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--update-monitor) option of C(docker service create).
- Requires API version >= 1.25.
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.monitor) instead.
type: raw
update_max_failure_ratio:
description:
- Fraction of tasks that may fail during an update before the failure action is invoked.
- Corresponds to the C(--update-max-failure-ratio) option of C(docker service create).
- Requires API version >= 1.25.
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.max_failure_ratio) instead.
type: float
update_order:
description:
- Specifies the order of operations when rolling out an updated task.
- Corresponds to the C(--update-order) option of C(docker service create).
- Requires API version >= 1.29.
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.order) instead.
type: str
choices:
- stop-first
- start-first
user:
description:
- Sets the username or UID used for the specified command.
- Before Ansible 2.8, the default value for this option was C(root).
- The default has been removed so that the user defined in the image is used if no user is specified here.
- Corresponds to the C(--user) option of C(docker service create).
type: str
working_dir:
description:
- Path to the working directory.
- Corresponds to the C(--workdir) option of C(docker service create).
type: str
version_added: "2.8"
extends_documentation_fragment:
- docker
- docker.docker_py_2_documentation
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 2.0.2"
- "Docker API >= 1.24"
notes:
- "Images will only resolve to the latest digest when using Docker API >= 1.30 and Docker SDK for Python >= 3.2.0.
When using older versions, use C(force_update: true) to trigger the swarm to resolve a new image."
'''
RETURN = '''
swarm_service:
returned: always
type: dict
description:
- Dictionary of variables representing the current state of the service.
Matches the module parameters format.
- Note that facts are not part of registered vars but accessible directly.
- Note that before Ansible 2.7.9, the return variable was documented as C(ansible_swarm_service),
while the module actually returned a variable called C(ansible_docker_service). The variable
was renamed to C(swarm_service) in both code and documentation for Ansible 2.7.9 and Ansible 2.8.0.
In Ansible 2.7.x, the old name C(ansible_docker_service) can still be used.
sample: '{
"args": [
"3600"
],
"command": [
"sleep"
],
"configs": null,
"constraints": [
"node.role == manager",
"engine.labels.operatingsystem == ubuntu 14.04"
],
"container_labels": null,
"dns": null,
"dns_options": null,
"dns_search": null,
"endpoint_mode": null,
"env": [
"ENVVAR1=envvar1",
"ENVVAR2=envvar2"
],
"force_update": null,
"groups": null,
"healthcheck": {
"interval": 90000000000,
"retries": 3,
"start_period": 30000000000,
"test": [
"CMD",
"curl",
"--fail",
"http://nginx.host.com"
],
"timeout": 10000000000
},
"healthcheck_disabled": false,
"hostname": null,
"hosts": null,
"image": "alpine:latest@sha256:b3dbf31b77fd99d9c08f780ce6f5282aba076d70a513a8be859d8d3a4d0c92b8",
"labels": {
"com.example.department": "Finance",
"com.example.description": "Accounting webapp"
},
"limit_cpu": 0.5,
"limit_memory": 52428800,
"log_driver": "fluentd",
"log_driver_options": {
"fluentd-address": "127.0.0.1:24224",
"fluentd-async-connect": "true",
"tag": "myservice"
},
"mode": "replicated",
"mounts": [
{
"readonly": false,
"source": "/tmp/",
"target": "/remote_tmp/",
"type": "bind",
"labels": null,
"propagation": null,
"no_copy": null,
"driver_config": null,
"tmpfs_size": null,
"tmpfs_mode": null
}
],
"networks": null,
"placement_preferences": [
{
"spread": "node.labels.mylabel"
}
],
"publish": null,
"read_only": null,
"replicas": 1,
"reserve_cpu": 0.25,
"reserve_memory": 20971520,
"restart_policy": "on-failure",
"restart_policy_attempts": 3,
"restart_policy_delay": 5000000000,
"restart_policy_window": 120000000000,
"secrets": null,
"stop_grace_period": null,
"stop_signal": null,
"tty": null,
"update_delay": 10000000000,
"update_failure_action": null,
"update_max_failure_ratio": null,
"update_monitor": null,
"update_order": "stop-first",
"update_parallelism": 2,
"user": null,
"working_dir": null
}'
changes:
returned: always
description:
- List of changed service attributes if a service has been altered, [] otherwise.
type: list
elements: str
sample: ['container_labels', 'replicas']
rebuilt:
returned: always
description:
- True if the service has been recreated (removed and created).
type: bool
sample: True
'''
EXAMPLES = '''
- name: Set command and arguments
docker_swarm_service:
name: myservice
image: alpine
command: sleep
args:
- "3600"
- name: Set a bind mount
docker_swarm_service:
name: myservice
image: alpine
mounts:
- source: /tmp/
target: /remote_tmp/
type: bind
- name: Set service labels
docker_swarm_service:
name: myservice
image: alpine
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
- name: Set environment variables
docker_swarm_service:
name: myservice
image: alpine
env:
ENVVAR1: envvar1
ENVVAR2: envvar2
env_files:
- envs/common.env
- envs/apps/web.env
- name: Set fluentd logging
docker_swarm_service:
name: myservice
image: alpine
logging:
driver: fluentd
options:
fluentd-address: "127.0.0.1:24224"
fluentd-async-connect: "true"
tag: myservice
- name: Set restart policies
docker_swarm_service:
name: myservice
image: alpine
restart_config:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
- name: Set update config
docker_swarm_service:
name: myservice
image: alpine
update_config:
parallelism: 2
delay: 10s
order: stop-first
- name: Set rollback config
docker_swarm_service:
name: myservice
image: alpine
update_config:
failure_action: rollback
rollback_config:
parallelism: 2
delay: 10s
order: stop-first
- name: Set placement preferences
docker_swarm_service:
name: myservice
image: alpine:edge
placement:
preferences:
- spread: node.labels.mylabel
constraints:
- node.role == manager
- engine.labels.operatingsystem == ubuntu 14.04
- name: Set configs
docker_swarm_service:
name: myservice
image: alpine:edge
configs:
- config_name: myconfig_name
filename: "/tmp/config.txt"
- name: Set networks
docker_swarm_service:
name: myservice
image: alpine:edge
networks:
- mynetwork
- name: Set networks as a dictionary
docker_swarm_service:
name: myservice
image: alpine:edge
networks:
- name: "mynetwork"
aliases:
- "mynetwork_alias"
options:
foo: bar
- name: Set secrets
docker_swarm_service:
name: myservice
image: alpine:edge
secrets:
- secret_name: mysecret_name
filename: "/run/secrets/secret.txt"
- name: Start service with healthcheck
docker_swarm_service:
name: myservice
image: nginx:1.13
healthcheck:
# Check if nginx server is healthy by curl'ing the server.
# If this fails or timeouts, the healthcheck fails.
test: ["CMD", "curl", "--fail", "http://nginx.host.com"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 30s
- name: Configure service resources
docker_swarm_service:
name: myservice
image: alpine:edge
reservations:
cpus: 0.25
memory: 20M
limits:
cpus: 0.50
memory: 50M
- name: Remove service
docker_swarm_service:
name: myservice
state: absent
'''
import shlex
import time
import operator
import traceback
from distutils.version import LooseVersion
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DifferenceTracker,
DockerBaseClass,
convert_duration_to_nanosecond,
parse_healthcheck,
clean_dict_booleans_for_docker_api,
RequestException,
)
from ansible.module_utils.basic import human_to_bytes
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text
try:
from docker import types
from docker.utils import (
parse_repository_tag,
parse_env_file,
format_environment,
)
from docker.errors import (
APIError,
DockerException,
NotFound,
)
except ImportError:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
def get_docker_environment(env, env_files):
"""
Will return a list of "KEY=VALUE" items. Supplied env variable can
be either a list or a dictionary.
If environment files are combined with explicit environment variables,
the explicit environment variables take precedence.
"""
env_dict = {}
if env_files:
for env_file in env_files:
parsed_env_file = parse_env_file(env_file)
for name, value in parsed_env_file.items():
env_dict[name] = str(value)
if env is not None and isinstance(env, string_types):
env = env.split(',')
if env is not None and isinstance(env, dict):
for name, value in env.items():
if not isinstance(value, string_types):
raise ValueError(
'Non-string value found for env option. '
'Ambiguous env options must be wrapped in quotes to avoid YAML parsing. Key: %s' % name
)
env_dict[name] = str(value)
elif env is not None and isinstance(env, list):
for item in env:
try:
name, value = item.split('=', 1)
except ValueError:
raise ValueError('Invalid environment variable found in list, needs to be in format KEY=VALUE.')
env_dict[name] = value
elif env is not None:
raise ValueError(
'Invalid type for env %s (%s). Only list or dict allowed.' % (env, type(env))
)
env_list = format_environment(env_dict)
if not env_list:
if env is not None or env_files is not None:
return []
else:
return None
return sorted(env_list)
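# Illustrative behaviour of get_docker_environment() (hypothetical inputs,
# shown as comments only since the helper depends on the Docker SDK):
#   get_docker_environment(['FOO=bar', 'BAZ=qux'], None)  -> ['BAZ=qux', 'FOO=bar']
#   get_docker_environment('FOO=bar,BAZ=qux', None)       -> ['BAZ=qux', 'FOO=bar']
#   get_docker_environment({'FOO': 'bar'}, ['a.env'])     -> FOO overrides any FOO set in a.env
#   get_docker_environment(None, None)                    -> None (nothing was specified)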
def get_docker_networks(networks, network_ids):
"""
Validate a list of network names or a list of network dictionaries.
Network names will be resolved to ids by using the network_ids mapping.
"""
if networks is None:
return None
parsed_networks = []
for network in networks:
if isinstance(network, string_types):
parsed_network = {'name': network}
elif isinstance(network, dict):
if 'name' not in network:
raise TypeError(
'"name" is required when networks are passed as dictionaries.'
)
name = network.pop('name')
parsed_network = {'name': name}
aliases = network.pop('aliases', None)
if aliases is not None:
if not isinstance(aliases, list):
raise TypeError('"aliases" network option is only allowed as a list')
if not all(
isinstance(alias, string_types) for alias in aliases
):
raise TypeError('Only strings are allowed as network aliases.')
parsed_network['aliases'] = aliases
options = network.pop('options', None)
if options is not None:
if not isinstance(options, dict):
raise TypeError('Only dict is allowed as network options.')
parsed_network['options'] = clean_dict_booleans_for_docker_api(options)
            # Check whether any invalid keys are left
if network:
invalid_keys = ', '.join(network.keys())
raise TypeError(
'%s are not valid keys for the networks option' % invalid_keys
)
else:
raise TypeError(
'Only a list of strings or dictionaries are allowed to be passed as networks.'
)
network_name = parsed_network.pop('name')
try:
parsed_network['id'] = network_ids[network_name]
except KeyError as e:
raise ValueError('Could not find a network named: %s.' % e)
parsed_networks.append(parsed_network)
return parsed_networks or []
def get_nanoseconds_from_raw_option(name, value):
if value is None:
return None
elif isinstance(value, int):
return value
elif isinstance(value, string_types):
try:
return int(value)
except ValueError:
return convert_duration_to_nanosecond(value)
else:
raise ValueError(
'Invalid type for %s %s (%s). Only string or int allowed.'
% (name, value, type(value))
)
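# Illustrative behaviour (assuming convert_duration_to_nanosecond() accepts
# docker-style duration strings such as "5s" or "1m30s"):
#   get_nanoseconds_from_raw_option('restart_policy_delay', 5)     -> 5
#   get_nanoseconds_from_raw_option('restart_policy_delay', '5s')  -> 5000000000
#   get_nanoseconds_from_raw_option('restart_policy_delay', None)  -> None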
def get_value(key, values, default=None):
value = values.get(key)
return value if value is not None else default
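# Illustrative behaviour of get_value(): a key whose value is None counts as
# unset, so the default wins:
#   get_value('delay', {'delay': None}, default='5s')  -> '5s'
#   get_value('delay', {'delay': '10s'}, default='5s') -> '10s'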
def has_dict_changed(new_dict, old_dict):
"""
Check if new_dict has differences compared to old_dict while
ignoring keys in old_dict which are None in new_dict.
"""
if new_dict is None:
return False
if not new_dict and old_dict:
return True
if not old_dict and new_dict:
return True
defined_options = dict(
(option, value) for option, value in new_dict.items()
if value is not None
)
for option, value in defined_options.items():
old_value = old_dict.get(option)
if not value and not old_value:
continue
if value != old_value:
return True
return False
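# Illustrative behaviour of has_dict_changed() (hypothetical logging configs):
#   has_dict_changed({'driver': 'json-file', 'options': None},
#                    {'driver': 'json-file', 'options': {'max-size': '10m'}})  -> False
#     ('options' is None in the new dict, so it is ignored)
#   has_dict_changed({'driver': 'fluentd'}, {'driver': 'json-file'})           -> True
#   has_dict_changed(None, {'driver': 'json-file'})                            -> False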
def has_list_changed(new_list, old_list, sort_lists=True, sort_key=None):
"""
Check two lists have differences. Sort lists by default.
"""
def sort_list(unsorted_list):
"""
Sort a given list.
The list may contain dictionaries, so use the sort key to handle them.
"""
if unsorted_list and isinstance(unsorted_list[0], dict):
if not sort_key:
raise Exception(
'A sort key was not specified when sorting list'
)
else:
return sorted(unsorted_list, key=lambda k: k[sort_key])
# Either the list is empty or does not contain dictionaries
try:
return sorted(unsorted_list)
except TypeError:
return unsorted_list
if new_list is None:
return False
old_list = old_list or []
if len(new_list) != len(old_list):
return True
if sort_lists:
zip_data = zip(sort_list(new_list), sort_list(old_list))
else:
zip_data = zip(new_list, old_list)
for new_item, old_item in zip_data:
is_same_type = type(new_item) == type(old_item)
if not is_same_type:
if isinstance(new_item, string_types) and isinstance(old_item, string_types):
# Even though the types are different between these items,
# they are both strings. Try matching on the same string type.
try:
new_item_type = type(new_item)
old_item_casted = new_item_type(old_item)
if new_item != old_item_casted:
return True
else:
continue
except UnicodeEncodeError:
# Fallback to assuming the strings are different
return True
else:
return True
if isinstance(new_item, dict):
if has_dict_changed(new_item, old_item):
return True
elif new_item != old_item:
return True
return False
def have_networks_changed(new_networks, old_networks):
"""Special case list checking for networks to sort aliases"""
if new_networks is None:
return False
old_networks = old_networks or []
if len(new_networks) != len(old_networks):
return True
zip_data = zip(
sorted(new_networks, key=lambda k: k['id']),
sorted(old_networks, key=lambda k: k['id'])
)
for new_item, old_item in zip_data:
new_item = dict(new_item)
old_item = dict(old_item)
# Sort the aliases
if 'aliases' in new_item:
new_item['aliases'] = sorted(new_item['aliases'] or [])
if 'aliases' in old_item:
old_item['aliases'] = sorted(old_item['aliases'] or [])
if has_dict_changed(new_item, old_item):
return True
return False
class DockerService(DockerBaseClass):
def __init__(self, docker_api_version, docker_py_version):
super(DockerService, self).__init__()
self.image = ""
self.command = None
self.args = None
self.endpoint_mode = None
self.dns = None
self.healthcheck = None
self.healthcheck_disabled = None
self.hostname = None
self.hosts = None
self.tty = None
self.dns_search = None
self.dns_options = None
self.env = None
self.force_update = None
self.groups = None
self.log_driver = None
self.log_driver_options = None
self.labels = None
self.container_labels = None
self.limit_cpu = None
self.limit_memory = None
self.reserve_cpu = None
self.reserve_memory = None
self.mode = "replicated"
self.user = None
self.mounts = None
self.configs = None
self.secrets = None
self.constraints = None
self.networks = None
self.stop_grace_period = None
self.stop_signal = None
self.publish = None
self.placement_preferences = None
self.replicas = -1
self.service_id = False
self.service_version = False
self.read_only = None
self.restart_policy = None
self.restart_policy_attempts = None
self.restart_policy_delay = None
self.restart_policy_window = None
self.rollback_config = None
self.update_delay = None
self.update_parallelism = None
self.update_failure_action = None
self.update_monitor = None
self.update_max_failure_ratio = None
self.update_order = None
self.working_dir = None
self.docker_api_version = docker_api_version
self.docker_py_version = docker_py_version
def get_facts(self):
return {
'image': self.image,
'mounts': self.mounts,
'configs': self.configs,
'networks': self.networks,
'command': self.command,
'args': self.args,
'tty': self.tty,
'dns': self.dns,
'dns_search': self.dns_search,
'dns_options': self.dns_options,
'healthcheck': self.healthcheck,
'healthcheck_disabled': self.healthcheck_disabled,
'hostname': self.hostname,
'hosts': self.hosts,
'env': self.env,
'force_update': self.force_update,
'groups': self.groups,
'log_driver': self.log_driver,
'log_driver_options': self.log_driver_options,
'publish': self.publish,
'constraints': self.constraints,
'placement_preferences': self.placement_preferences,
'labels': self.labels,
'container_labels': self.container_labels,
'mode': self.mode,
'replicas': self.replicas,
'endpoint_mode': self.endpoint_mode,
'restart_policy': self.restart_policy,
'secrets': self.secrets,
'stop_grace_period': self.stop_grace_period,
'stop_signal': self.stop_signal,
'limit_cpu': self.limit_cpu,
'limit_memory': self.limit_memory,
'read_only': self.read_only,
'reserve_cpu': self.reserve_cpu,
'reserve_memory': self.reserve_memory,
'restart_policy_delay': self.restart_policy_delay,
'restart_policy_attempts': self.restart_policy_attempts,
'restart_policy_window': self.restart_policy_window,
'rollback_config': self.rollback_config,
'update_delay': self.update_delay,
'update_parallelism': self.update_parallelism,
'update_failure_action': self.update_failure_action,
'update_monitor': self.update_monitor,
'update_max_failure_ratio': self.update_max_failure_ratio,
'update_order': self.update_order,
'user': self.user,
'working_dir': self.working_dir,
}
@property
def can_update_networks(self):
# Before Docker API 1.29 adding/removing networks was not supported
return (
self.docker_api_version >= LooseVersion('1.29') and
self.docker_py_version >= LooseVersion('2.7')
)
@property
def can_use_task_template_networks(self):
# In Docker API 1.25 attaching networks to TaskTemplate is preferred over Spec
return (
self.docker_api_version >= LooseVersion('1.25') and
self.docker_py_version >= LooseVersion('2.7')
)
@staticmethod
def get_restart_config_from_ansible_params(params):
restart_config = params['restart_config'] or {}
condition = get_value(
'condition',
restart_config,
default=params['restart_policy']
)
delay = get_value(
'delay',
restart_config,
default=params['restart_policy_delay']
)
delay = get_nanoseconds_from_raw_option(
'restart_policy_delay',
delay
)
max_attempts = get_value(
'max_attempts',
restart_config,
default=params['restart_policy_attempts']
)
window = get_value(
'window',
restart_config,
default=params['restart_policy_window']
)
window = get_nanoseconds_from_raw_option(
'restart_policy_window',
window
)
return {
'restart_policy': condition,
'restart_policy_delay': delay,
'restart_policy_attempts': max_attempts,
'restart_policy_window': window
}
@staticmethod
def get_update_config_from_ansible_params(params):
update_config = params['update_config'] or {}
parallelism = get_value(
'parallelism',
update_config,
default=params['update_parallelism']
)
delay = get_value(
'delay',
update_config,
default=params['update_delay']
)
delay = get_nanoseconds_from_raw_option(
'update_delay',
delay
)
failure_action = get_value(
'failure_action',
update_config,
default=params['update_failure_action']
)
monitor = get_value(
'monitor',
update_config,
default=params['update_monitor']
)
monitor = get_nanoseconds_from_raw_option(
'update_monitor',
monitor
)
max_failure_ratio = get_value(
'max_failure_ratio',
update_config,
default=params['update_max_failure_ratio']
)
order = get_value(
'order',
update_config,
default=params['update_order']
)
return {
'update_parallelism': parallelism,
'update_delay': delay,
'update_failure_action': failure_action,
'update_monitor': monitor,
'update_max_failure_ratio': max_failure_ratio,
'update_order': order
}
@staticmethod
def get_rollback_config_from_ansible_params(params):
if params['rollback_config'] is None:
return None
rollback_config = params['rollback_config'] or {}
delay = get_nanoseconds_from_raw_option(
'rollback_config.delay',
rollback_config.get('delay')
)
monitor = get_nanoseconds_from_raw_option(
'rollback_config.monitor',
rollback_config.get('monitor')
)
return {
'parallelism': rollback_config.get('parallelism'),
'delay': delay,
'failure_action': rollback_config.get('failure_action'),
'monitor': monitor,
'max_failure_ratio': rollback_config.get('max_failure_ratio'),
'order': rollback_config.get('order'),
}
@staticmethod
def get_logging_from_ansible_params(params):
logging_config = params['logging'] or {}
driver = get_value(
'driver',
logging_config,
default=params['log_driver']
)
options = get_value(
'options',
logging_config,
default=params['log_driver_options']
)
return {
'log_driver': driver,
'log_driver_options': options,
}
@staticmethod
def get_limits_from_ansible_params(params):
limits = params['limits'] or {}
cpus = get_value(
'cpus',
limits,
default=params['limit_cpu']
)
memory = get_value(
'memory',
limits,
default=params['limit_memory']
)
if memory is not None:
try:
memory = human_to_bytes(memory)
except ValueError as exc:
raise Exception('Failed to convert limit_memory to bytes: %s' % exc)
return {
'limit_cpu': cpus,
'limit_memory': memory,
}
@staticmethod
def get_reservations_from_ansible_params(params):
reservations = params['reservations'] or {}
cpus = get_value(
'cpus',
reservations,
default=params['reserve_cpu']
)
memory = get_value(
'memory',
reservations,
default=params['reserve_memory']
)
if memory is not None:
try:
memory = human_to_bytes(memory)
except ValueError as exc:
raise Exception('Failed to convert reserve_memory to bytes: %s' % exc)
return {
'reserve_cpu': cpus,
'reserve_memory': memory,
}
@staticmethod
def get_placement_from_ansible_params(params):
placement = params['placement'] or {}
constraints = get_value(
'constraints',
placement,
default=params['constraints']
)
preferences = placement.get('preferences')
return {
'constraints': constraints,
'placement_preferences': preferences,
}
@classmethod
def from_ansible_params(
cls,
ap,
old_service,
image_digest,
secret_ids,
config_ids,
network_ids,
docker_api_version,
docker_py_version,
):
s = DockerService(docker_api_version, docker_py_version)
s.image = image_digest
s.args = ap['args']
s.endpoint_mode = ap['endpoint_mode']
s.dns = ap['dns']
s.dns_search = ap['dns_search']
s.dns_options = ap['dns_options']
s.healthcheck, s.healthcheck_disabled = parse_healthcheck(ap['healthcheck'])
s.hostname = ap['hostname']
s.hosts = ap['hosts']
s.tty = ap['tty']
s.labels = ap['labels']
s.container_labels = ap['container_labels']
s.mode = ap['mode']
s.stop_signal = ap['stop_signal']
s.user = ap['user']
s.working_dir = ap['working_dir']
s.read_only = ap['read_only']
s.networks = get_docker_networks(ap['networks'], network_ids)
s.command = ap['command']
if isinstance(s.command, string_types):
s.command = shlex.split(s.command)
elif isinstance(s.command, list):
invalid_items = [
(index, item)
for index, item in enumerate(s.command)
if not isinstance(item, string_types)
]
if invalid_items:
errors = ', '.join(
[
'%s (%s) at index %s' % (item, type(item), index)
for index, item in invalid_items
]
)
raise Exception(
'All items in a command list need to be strings. '
'Check quoting. Invalid items: %s.'
% errors
)
s.command = ap['command']
elif s.command is not None:
raise ValueError(
'Invalid type for command %s (%s). '
'Only string or list allowed. Check quoting.'
% (s.command, type(s.command))
)
s.env = get_docker_environment(ap['env'], ap['env_files'])
s.rollback_config = cls.get_rollback_config_from_ansible_params(ap)
update_config = cls.get_update_config_from_ansible_params(ap)
for key, value in update_config.items():
setattr(s, key, value)
restart_config = cls.get_restart_config_from_ansible_params(ap)
for key, value in restart_config.items():
setattr(s, key, value)
logging_config = cls.get_logging_from_ansible_params(ap)
for key, value in logging_config.items():
setattr(s, key, value)
limits = cls.get_limits_from_ansible_params(ap)
for key, value in limits.items():
setattr(s, key, value)
reservations = cls.get_reservations_from_ansible_params(ap)
for key, value in reservations.items():
setattr(s, key, value)
placement = cls.get_placement_from_ansible_params(ap)
for key, value in placement.items():
setattr(s, key, value)
if ap['stop_grace_period'] is not None:
s.stop_grace_period = convert_duration_to_nanosecond(ap['stop_grace_period'])
if ap['force_update']:
s.force_update = int(str(time.time()).replace('.', ''))
if ap['groups'] is not None:
# In case integers are passed as groups, we need to convert them to
# strings as docker internally treats them as strings.
s.groups = [str(g) for g in ap['groups']]
if ap['replicas'] == -1:
if old_service:
s.replicas = old_service.replicas
else:
s.replicas = 1
else:
s.replicas = ap['replicas']
if ap['publish'] is not None:
s.publish = []
for param_p in ap['publish']:
service_p = {}
service_p['protocol'] = param_p['protocol']
service_p['mode'] = param_p['mode']
service_p['published_port'] = param_p['published_port']
service_p['target_port'] = param_p['target_port']
s.publish.append(service_p)
if ap['mounts'] is not None:
s.mounts = []
for param_m in ap['mounts']:
service_m = {}
service_m['readonly'] = param_m['readonly']
service_m['type'] = param_m['type']
if param_m['source'] is None and param_m['type'] != 'tmpfs':
raise ValueError('Source must be specified for mounts which are not of type tmpfs')
service_m['source'] = param_m['source'] or ''
service_m['target'] = param_m['target']
service_m['labels'] = param_m['labels']
service_m['no_copy'] = param_m['no_copy']
service_m['propagation'] = param_m['propagation']
service_m['driver_config'] = param_m['driver_config']
service_m['tmpfs_mode'] = param_m['tmpfs_mode']
tmpfs_size = param_m['tmpfs_size']
if tmpfs_size is not None:
try:
tmpfs_size = human_to_bytes(tmpfs_size)
except ValueError as exc:
raise ValueError(
'Failed to convert tmpfs_size to bytes: %s' % exc
)
service_m['tmpfs_size'] = tmpfs_size
s.mounts.append(service_m)
if ap['configs'] is not None:
s.configs = []
for param_m in ap['configs']:
service_c = {}
config_name = param_m['config_name']
service_c['config_id'] = param_m['config_id'] or config_ids[config_name]
service_c['config_name'] = config_name
service_c['filename'] = param_m['filename'] or config_name
service_c['uid'] = param_m['uid']
service_c['gid'] = param_m['gid']
service_c['mode'] = param_m['mode']
s.configs.append(service_c)
if ap['secrets'] is not None:
s.secrets = []
for param_m in ap['secrets']:
service_s = {}
secret_name = param_m['secret_name']
service_s['secret_id'] = param_m['secret_id'] or secret_ids[secret_name]
service_s['secret_name'] = secret_name
service_s['filename'] = param_m['filename'] or secret_name
service_s['uid'] = param_m['uid']
service_s['gid'] = param_m['gid']
service_s['mode'] = param_m['mode']
s.secrets.append(service_s)
return s
def compare(self, os):
differences = DifferenceTracker()
needs_rebuild = False
force_update = False
if self.endpoint_mode is not None and self.endpoint_mode != os.endpoint_mode:
differences.add('endpoint_mode', parameter=self.endpoint_mode, active=os.endpoint_mode)
if has_list_changed(self.env, os.env):
differences.add('env', parameter=self.env, active=os.env)
if self.log_driver is not None and self.log_driver != os.log_driver:
differences.add('log_driver', parameter=self.log_driver, active=os.log_driver)
if self.log_driver_options is not None and self.log_driver_options != (os.log_driver_options or {}):
differences.add('log_opt', parameter=self.log_driver_options, active=os.log_driver_options)
if self.mode != os.mode:
needs_rebuild = True
differences.add('mode', parameter=self.mode, active=os.mode)
if has_list_changed(self.mounts, os.mounts, sort_key='target'):
differences.add('mounts', parameter=self.mounts, active=os.mounts)
if has_list_changed(self.configs, os.configs, sort_key='config_name'):
differences.add('configs', parameter=self.configs, active=os.configs)
if has_list_changed(self.secrets, os.secrets, sort_key='secret_name'):
differences.add('secrets', parameter=self.secrets, active=os.secrets)
if have_networks_changed(self.networks, os.networks):
differences.add('networks', parameter=self.networks, active=os.networks)
needs_rebuild = not self.can_update_networks
if self.replicas != os.replicas:
differences.add('replicas', parameter=self.replicas, active=os.replicas)
if has_list_changed(self.command, os.command, sort_lists=False):
differences.add('command', parameter=self.command, active=os.command)
if has_list_changed(self.args, os.args, sort_lists=False):
differences.add('args', parameter=self.args, active=os.args)
if has_list_changed(self.constraints, os.constraints):
differences.add('constraints', parameter=self.constraints, active=os.constraints)
if has_list_changed(self.placement_preferences, os.placement_preferences, sort_lists=False):
differences.add('placement_preferences', parameter=self.placement_preferences, active=os.placement_preferences)
if has_list_changed(self.groups, os.groups):
differences.add('groups', parameter=self.groups, active=os.groups)
if self.labels is not None and self.labels != (os.labels or {}):
differences.add('labels', parameter=self.labels, active=os.labels)
if self.limit_cpu is not None and self.limit_cpu != os.limit_cpu:
differences.add('limit_cpu', parameter=self.limit_cpu, active=os.limit_cpu)
if self.limit_memory is not None and self.limit_memory != os.limit_memory:
differences.add('limit_memory', parameter=self.limit_memory, active=os.limit_memory)
if self.reserve_cpu is not None and self.reserve_cpu != os.reserve_cpu:
differences.add('reserve_cpu', parameter=self.reserve_cpu, active=os.reserve_cpu)
if self.reserve_memory is not None and self.reserve_memory != os.reserve_memory:
differences.add('reserve_memory', parameter=self.reserve_memory, active=os.reserve_memory)
if self.container_labels is not None and self.container_labels != (os.container_labels or {}):
differences.add('container_labels', parameter=self.container_labels, active=os.container_labels)
if self.stop_signal is not None and self.stop_signal != os.stop_signal:
differences.add('stop_signal', parameter=self.stop_signal, active=os.stop_signal)
if self.stop_grace_period is not None and self.stop_grace_period != os.stop_grace_period:
differences.add('stop_grace_period', parameter=self.stop_grace_period, active=os.stop_grace_period)
if self.has_publish_changed(os.publish):
differences.add('publish', parameter=self.publish, active=os.publish)
if self.read_only is not None and self.read_only != os.read_only:
differences.add('read_only', parameter=self.read_only, active=os.read_only)
if self.restart_policy is not None and self.restart_policy != os.restart_policy:
differences.add('restart_policy', parameter=self.restart_policy, active=os.restart_policy)
if self.restart_policy_attempts is not None and self.restart_policy_attempts != os.restart_policy_attempts:
differences.add('restart_policy_attempts', parameter=self.restart_policy_attempts, active=os.restart_policy_attempts)
if self.restart_policy_delay is not None and self.restart_policy_delay != os.restart_policy_delay:
differences.add('restart_policy_delay', parameter=self.restart_policy_delay, active=os.restart_policy_delay)
if self.restart_policy_window is not None and self.restart_policy_window != os.restart_policy_window:
differences.add('restart_policy_window', parameter=self.restart_policy_window, active=os.restart_policy_window)
if has_dict_changed(self.rollback_config, os.rollback_config):
differences.add('rollback_config', parameter=self.rollback_config, active=os.rollback_config)
if self.update_delay is not None and self.update_delay != os.update_delay:
differences.add('update_delay', parameter=self.update_delay, active=os.update_delay)
if self.update_parallelism is not None and self.update_parallelism != os.update_parallelism:
differences.add('update_parallelism', parameter=self.update_parallelism, active=os.update_parallelism)
if self.update_failure_action is not None and self.update_failure_action != os.update_failure_action:
differences.add('update_failure_action', parameter=self.update_failure_action, active=os.update_failure_action)
if self.update_monitor is not None and self.update_monitor != os.update_monitor:
differences.add('update_monitor', parameter=self.update_monitor, active=os.update_monitor)
if self.update_max_failure_ratio is not None and self.update_max_failure_ratio != os.update_max_failure_ratio:
differences.add('update_max_failure_ratio', parameter=self.update_max_failure_ratio, active=os.update_max_failure_ratio)
if self.update_order is not None and self.update_order != os.update_order:
differences.add('update_order', parameter=self.update_order, active=os.update_order)
has_image_changed, change = self.has_image_changed(os.image)
if has_image_changed:
differences.add('image', parameter=self.image, active=change)
if self.user and self.user != os.user:
differences.add('user', parameter=self.user, active=os.user)
if has_list_changed(self.dns, os.dns, sort_lists=False):
differences.add('dns', parameter=self.dns, active=os.dns)
if has_list_changed(self.dns_search, os.dns_search, sort_lists=False):
differences.add('dns_search', parameter=self.dns_search, active=os.dns_search)
if has_list_changed(self.dns_options, os.dns_options):
differences.add('dns_options', parameter=self.dns_options, active=os.dns_options)
if self.has_healthcheck_changed(os):
differences.add('healthcheck', parameter=self.healthcheck, active=os.healthcheck)
if self.hostname is not None and self.hostname != os.hostname:
differences.add('hostname', parameter=self.hostname, active=os.hostname)
if self.hosts is not None and self.hosts != (os.hosts or {}):
differences.add('hosts', parameter=self.hosts, active=os.hosts)
if self.tty is not None and self.tty != os.tty:
differences.add('tty', parameter=self.tty, active=os.tty)
if self.working_dir is not None and self.working_dir != os.working_dir:
differences.add('working_dir', parameter=self.working_dir, active=os.working_dir)
if self.force_update:
force_update = True
return not differences.empty or force_update, differences, needs_rebuild, force_update
    def has_healthcheck_changed(self, old_service):
        if self.healthcheck_disabled is False and self.healthcheck is None:
            return False
        if self.healthcheck_disabled and old_service.healthcheck is None:
            return False
        return self.healthcheck != old_service.healthcheck
def has_publish_changed(self, old_publish):
if self.publish is None:
return False
old_publish = old_publish or []
if len(self.publish) != len(old_publish):
return True
publish_sorter = operator.itemgetter('published_port', 'target_port', 'protocol')
publish = sorted(self.publish, key=publish_sorter)
old_publish = sorted(old_publish, key=publish_sorter)
for publish_item, old_publish_item in zip(publish, old_publish):
ignored_keys = set()
if not publish_item.get('mode'):
ignored_keys.add('mode')
# Create copies of publish_item dicts where keys specified in ignored_keys are left out
filtered_old_publish_item = dict(
(k, v) for k, v in old_publish_item.items() if k not in ignored_keys
)
filtered_publish_item = dict(
(k, v) for k, v in publish_item.items() if k not in ignored_keys
)
if filtered_publish_item != filtered_old_publish_item:
return True
return False
def has_image_changed(self, old_image):
if '@' not in self.image:
old_image = old_image.split('@')[0]
return self.image != old_image, old_image
def build_container_spec(self):
mounts = None
if self.mounts is not None:
mounts = []
for mount_config in self.mounts:
mount_options = {
'target': 'target',
'source': 'source',
'type': 'type',
'readonly': 'read_only',
'propagation': 'propagation',
'labels': 'labels',
'no_copy': 'no_copy',
'driver_config': 'driver_config',
'tmpfs_size': 'tmpfs_size',
'tmpfs_mode': 'tmpfs_mode'
}
mount_args = {}
for option, mount_arg in mount_options.items():
value = mount_config.get(option)
if value is not None:
mount_args[mount_arg] = value
mounts.append(types.Mount(**mount_args))
configs = None
if self.configs is not None:
configs = []
for config_config in self.configs:
config_args = {
'config_id': config_config['config_id'],
'config_name': config_config['config_name']
}
filename = config_config.get('filename')
if filename:
config_args['filename'] = filename
uid = config_config.get('uid')
if uid:
config_args['uid'] = uid
gid = config_config.get('gid')
if gid:
config_args['gid'] = gid
mode = config_config.get('mode')
if mode:
config_args['mode'] = mode
configs.append(types.ConfigReference(**config_args))
secrets = None
if self.secrets is not None:
secrets = []
for secret_config in self.secrets:
secret_args = {
'secret_id': secret_config['secret_id'],
'secret_name': secret_config['secret_name']
}
filename = secret_config.get('filename')
if filename:
secret_args['filename'] = filename
uid = secret_config.get('uid')
if uid:
secret_args['uid'] = uid
gid = secret_config.get('gid')
if gid:
secret_args['gid'] = gid
mode = secret_config.get('mode')
if mode:
secret_args['mode'] = mode
secrets.append(types.SecretReference(**secret_args))
dns_config_args = {}
if self.dns is not None:
dns_config_args['nameservers'] = self.dns
if self.dns_search is not None:
dns_config_args['search'] = self.dns_search
if self.dns_options is not None:
dns_config_args['options'] = self.dns_options
dns_config = types.DNSConfig(**dns_config_args) if dns_config_args else None
container_spec_args = {}
if self.command is not None:
container_spec_args['command'] = self.command
if self.args is not None:
container_spec_args['args'] = self.args
if self.env is not None:
container_spec_args['env'] = self.env
if self.user is not None:
container_spec_args['user'] = self.user
if self.container_labels is not None:
container_spec_args['labels'] = self.container_labels
if self.healthcheck is not None:
container_spec_args['healthcheck'] = types.Healthcheck(**self.healthcheck)
if self.hostname is not None:
container_spec_args['hostname'] = self.hostname
if self.hosts is not None:
container_spec_args['hosts'] = self.hosts
if self.read_only is not None:
container_spec_args['read_only'] = self.read_only
if self.stop_grace_period is not None:
container_spec_args['stop_grace_period'] = self.stop_grace_period
if self.stop_signal is not None:
container_spec_args['stop_signal'] = self.stop_signal
if self.tty is not None:
container_spec_args['tty'] = self.tty
if self.groups is not None:
container_spec_args['groups'] = self.groups
if self.working_dir is not None:
container_spec_args['workdir'] = self.working_dir
if secrets is not None:
container_spec_args['secrets'] = secrets
if mounts is not None:
container_spec_args['mounts'] = mounts
if dns_config is not None:
container_spec_args['dns_config'] = dns_config
if configs is not None:
container_spec_args['configs'] = configs
return types.ContainerSpec(self.image, **container_spec_args)
def build_placement(self):
placement_args = {}
if self.constraints is not None:
placement_args['constraints'] = self.constraints
if self.placement_preferences is not None:
placement_args['preferences'] = [
{key.title(): {'SpreadDescriptor': value}}
for preference in self.placement_preferences
for key, value in preference.items()
]
return types.Placement(**placement_args) if placement_args else None
def build_update_config(self):
update_config_args = {}
if self.update_parallelism is not None:
update_config_args['parallelism'] = self.update_parallelism
if self.update_delay is not None:
update_config_args['delay'] = self.update_delay
if self.update_failure_action is not None:
update_config_args['failure_action'] = self.update_failure_action
if self.update_monitor is not None:
update_config_args['monitor'] = self.update_monitor
if self.update_max_failure_ratio is not None:
update_config_args['max_failure_ratio'] = self.update_max_failure_ratio
if self.update_order is not None:
update_config_args['order'] = self.update_order
return types.UpdateConfig(**update_config_args) if update_config_args else None
def build_log_driver(self):
log_driver_args = {}
if self.log_driver is not None:
log_driver_args['name'] = self.log_driver
if self.log_driver_options is not None:
log_driver_args['options'] = self.log_driver_options
return types.DriverConfig(**log_driver_args) if log_driver_args else None
def build_restart_policy(self):
restart_policy_args = {}
if self.restart_policy is not None:
restart_policy_args['condition'] = self.restart_policy
if self.restart_policy_delay is not None:
restart_policy_args['delay'] = self.restart_policy_delay
if self.restart_policy_attempts is not None:
restart_policy_args['max_attempts'] = self.restart_policy_attempts
if self.restart_policy_window is not None:
restart_policy_args['window'] = self.restart_policy_window
return types.RestartPolicy(**restart_policy_args) if restart_policy_args else None
def build_rollback_config(self):
if self.rollback_config is None:
return None
rollback_config_options = [
'parallelism',
'delay',
'failure_action',
'monitor',
'max_failure_ratio',
'order',
]
rollback_config_args = {}
for option in rollback_config_options:
value = self.rollback_config.get(option)
if value is not None:
rollback_config_args[option] = value
return types.RollbackConfig(**rollback_config_args) if rollback_config_args else None
def build_resources(self):
resources_args = {}
if self.limit_cpu is not None:
resources_args['cpu_limit'] = int(self.limit_cpu * 1000000000.0)
if self.limit_memory is not None:
resources_args['mem_limit'] = self.limit_memory
if self.reserve_cpu is not None:
resources_args['cpu_reservation'] = int(self.reserve_cpu * 1000000000.0)
if self.reserve_memory is not None:
resources_args['mem_reservation'] = self.reserve_memory
return types.Resources(**resources_args) if resources_args else None
def build_task_template(self, container_spec, placement=None):
log_driver = self.build_log_driver()
restart_policy = self.build_restart_policy()
resources = self.build_resources()
task_template_args = {}
if placement is not None:
task_template_args['placement'] = placement
if log_driver is not None:
task_template_args['log_driver'] = log_driver
if restart_policy is not None:
task_template_args['restart_policy'] = restart_policy
if resources is not None:
task_template_args['resources'] = resources
if self.force_update:
task_template_args['force_update'] = self.force_update
if self.can_use_task_template_networks:
networks = self.build_networks()
if networks:
task_template_args['networks'] = networks
return types.TaskTemplate(container_spec=container_spec, **task_template_args)
def build_service_mode(self):
if self.mode == 'global':
self.replicas = None
return types.ServiceMode(self.mode, replicas=self.replicas)
def build_networks(self):
networks = None
if self.networks is not None:
networks = []
for network in self.networks:
docker_network = {'Target': network['id']}
if 'aliases' in network:
docker_network['Aliases'] = network['aliases']
if 'options' in network:
docker_network['DriverOpts'] = network['options']
networks.append(docker_network)
return networks
def build_endpoint_spec(self):
endpoint_spec_args = {}
if self.publish is not None:
ports = []
for port in self.publish:
port_spec = {
'Protocol': port['protocol'],
'PublishedPort': port['published_port'],
'TargetPort': port['target_port']
}
if port.get('mode'):
port_spec['PublishMode'] = port['mode']
ports.append(port_spec)
endpoint_spec_args['ports'] = ports
if self.endpoint_mode is not None:
endpoint_spec_args['mode'] = self.endpoint_mode
return types.EndpointSpec(**endpoint_spec_args) if endpoint_spec_args else None
def build_docker_service(self):
container_spec = self.build_container_spec()
placement = self.build_placement()
task_template = self.build_task_template(container_spec, placement)
update_config = self.build_update_config()
rollback_config = self.build_rollback_config()
service_mode = self.build_service_mode()
endpoint_spec = self.build_endpoint_spec()
service = {'task_template': task_template, 'mode': service_mode}
if update_config:
service['update_config'] = update_config
if rollback_config:
service['rollback_config'] = rollback_config
if endpoint_spec:
service['endpoint_spec'] = endpoint_spec
if self.labels:
service['labels'] = self.labels
if not self.can_use_task_template_networks:
networks = self.build_networks()
if networks:
service['networks'] = networks
return service
class DockerServiceManager(object):
def __init__(self, client):
self.client = client
self.retries = 2
self.diff_tracker = None
def get_service(self, name):
try:
raw_data = self.client.inspect_service(name)
except NotFound:
return None
ds = DockerService(self.client.docker_api_version, self.client.docker_py_version)
task_template_data = raw_data['Spec']['TaskTemplate']
ds.image = task_template_data['ContainerSpec']['Image']
ds.user = task_template_data['ContainerSpec'].get('User')
ds.env = task_template_data['ContainerSpec'].get('Env')
ds.command = task_template_data['ContainerSpec'].get('Command')
ds.args = task_template_data['ContainerSpec'].get('Args')
ds.groups = task_template_data['ContainerSpec'].get('Groups')
ds.stop_grace_period = task_template_data['ContainerSpec'].get('StopGracePeriod')
ds.stop_signal = task_template_data['ContainerSpec'].get('StopSignal')
ds.working_dir = task_template_data['ContainerSpec'].get('Dir')
ds.read_only = task_template_data['ContainerSpec'].get('ReadOnly')
healthcheck_data = task_template_data['ContainerSpec'].get('Healthcheck')
if healthcheck_data:
options = {
'Test': 'test',
'Interval': 'interval',
'Timeout': 'timeout',
'StartPeriod': 'start_period',
'Retries': 'retries'
}
healthcheck = dict(
(options[key], value) for key, value in healthcheck_data.items()
if value is not None and key in options
)
ds.healthcheck = healthcheck
update_config_data = raw_data['Spec'].get('UpdateConfig')
if update_config_data:
ds.update_delay = update_config_data.get('Delay')
ds.update_parallelism = update_config_data.get('Parallelism')
ds.update_failure_action = update_config_data.get('FailureAction')
ds.update_monitor = update_config_data.get('Monitor')
ds.update_max_failure_ratio = update_config_data.get('MaxFailureRatio')
ds.update_order = update_config_data.get('Order')
rollback_config_data = raw_data['Spec'].get('RollbackConfig')
if rollback_config_data:
ds.rollback_config = {
'parallelism': rollback_config_data.get('Parallelism'),
'delay': rollback_config_data.get('Delay'),
'failure_action': rollback_config_data.get('FailureAction'),
'monitor': rollback_config_data.get('Monitor'),
'max_failure_ratio': rollback_config_data.get('MaxFailureRatio'),
'order': rollback_config_data.get('Order'),
}
dns_config = task_template_data['ContainerSpec'].get('DNSConfig')
if dns_config:
ds.dns = dns_config.get('Nameservers')
ds.dns_search = dns_config.get('Search')
ds.dns_options = dns_config.get('Options')
ds.hostname = task_template_data['ContainerSpec'].get('Hostname')
hosts = task_template_data['ContainerSpec'].get('Hosts')
if hosts:
hosts = [
list(reversed(host.split(":", 1)))
if ":" in host
else host.split(" ", 1)
for host in hosts
]
ds.hosts = dict((hostname, ip) for ip, hostname in hosts)
ds.tty = task_template_data['ContainerSpec'].get('TTY')
placement = task_template_data.get('Placement')
if placement:
ds.constraints = placement.get('Constraints')
placement_preferences = []
for preference in placement.get('Preferences', []):
placement_preferences.append(
dict(
(key.lower(), value['SpreadDescriptor'])
for key, value in preference.items()
)
)
ds.placement_preferences = placement_preferences or None
restart_policy_data = task_template_data.get('RestartPolicy')
if restart_policy_data:
ds.restart_policy = restart_policy_data.get('Condition')
ds.restart_policy_delay = restart_policy_data.get('Delay')
ds.restart_policy_attempts = restart_policy_data.get('MaxAttempts')
ds.restart_policy_window = restart_policy_data.get('Window')
raw_data_endpoint_spec = raw_data['Spec'].get('EndpointSpec')
if raw_data_endpoint_spec:
ds.endpoint_mode = raw_data_endpoint_spec.get('Mode')
raw_data_ports = raw_data_endpoint_spec.get('Ports')
if raw_data_ports:
ds.publish = []
for port in raw_data_ports:
ds.publish.append({
'protocol': port['Protocol'],
'mode': port.get('PublishMode', None),
'published_port': int(port['PublishedPort']),
'target_port': int(port['TargetPort'])
})
raw_data_limits = task_template_data.get('Resources', {}).get('Limits')
if raw_data_limits:
raw_cpu_limits = raw_data_limits.get('NanoCPUs')
if raw_cpu_limits:
ds.limit_cpu = float(raw_cpu_limits) / 1000000000
raw_memory_limits = raw_data_limits.get('MemoryBytes')
if raw_memory_limits:
ds.limit_memory = int(raw_memory_limits)
raw_data_reservations = task_template_data.get('Resources', {}).get('Reservations')
if raw_data_reservations:
raw_cpu_reservations = raw_data_reservations.get('NanoCPUs')
if raw_cpu_reservations:
ds.reserve_cpu = float(raw_cpu_reservations) / 1000000000
raw_memory_reservations = raw_data_reservations.get('MemoryBytes')
if raw_memory_reservations:
ds.reserve_memory = int(raw_memory_reservations)
ds.labels = raw_data['Spec'].get('Labels')
ds.log_driver = task_template_data.get('LogDriver', {}).get('Name')
ds.log_driver_options = task_template_data.get('LogDriver', {}).get('Options')
ds.container_labels = task_template_data['ContainerSpec'].get('Labels')
mode = raw_data['Spec']['Mode']
if 'Replicated' in mode.keys():
ds.mode = to_text('replicated', encoding='utf-8')
ds.replicas = mode['Replicated']['Replicas']
elif 'Global' in mode.keys():
ds.mode = 'global'
else:
raise Exception('Unknown service mode: %s' % mode)
raw_data_mounts = task_template_data['ContainerSpec'].get('Mounts')
if raw_data_mounts:
ds.mounts = []
for mount_data in raw_data_mounts:
bind_options = mount_data.get('BindOptions', {})
volume_options = mount_data.get('VolumeOptions', {})
tmpfs_options = mount_data.get('TmpfsOptions', {})
driver_config = volume_options.get('DriverConfig', {})
driver_config = dict(
(key.lower(), value) for key, value in driver_config.items()
) or None
ds.mounts.append({
'source': mount_data.get('Source', ''),
'type': mount_data['Type'],
'target': mount_data['Target'],
'readonly': mount_data.get('ReadOnly'),
'propagation': bind_options.get('Propagation'),
'no_copy': volume_options.get('NoCopy'),
'labels': volume_options.get('Labels'),
'driver_config': driver_config,
'tmpfs_mode': tmpfs_options.get('Mode'),
'tmpfs_size': tmpfs_options.get('SizeBytes'),
})
raw_data_configs = task_template_data['ContainerSpec'].get('Configs')
if raw_data_configs:
ds.configs = []
for config_data in raw_data_configs:
ds.configs.append({
'config_id': config_data['ConfigID'],
'config_name': config_data['ConfigName'],
'filename': config_data['File'].get('Name'),
'uid': config_data['File'].get('UID'),
'gid': config_data['File'].get('GID'),
'mode': config_data['File'].get('Mode')
})
raw_data_secrets = task_template_data['ContainerSpec'].get('Secrets')
if raw_data_secrets:
ds.secrets = []
for secret_data in raw_data_secrets:
ds.secrets.append({
'secret_id': secret_data['SecretID'],
'secret_name': secret_data['SecretName'],
'filename': secret_data['File'].get('Name'),
'uid': secret_data['File'].get('UID'),
'gid': secret_data['File'].get('GID'),
'mode': secret_data['File'].get('Mode')
})
raw_networks_data = task_template_data.get('Networks', raw_data['Spec'].get('Networks'))
if raw_networks_data:
ds.networks = []
for network_data in raw_networks_data:
network = {'id': network_data['Target']}
if 'Aliases' in network_data:
network['aliases'] = network_data['Aliases']
if 'DriverOpts' in network_data:
network['options'] = network_data['DriverOpts']
ds.networks.append(network)
ds.service_version = raw_data['Version']['Index']
ds.service_id = raw_data['ID']
return ds
def update_service(self, name, old_service, new_service):
service_data = new_service.build_docker_service()
result = self.client.update_service(
old_service.service_id,
old_service.service_version,
name=name,
**service_data
)
        # Prior to Docker SDK 4.0.0, update_service() returned no warnings,
        # so this call is a no-op there.
        # (see https://github.com/docker/docker-py/pull/2272)
self.client.report_warnings(result, ['Warning'])
def create_service(self, name, service):
service_data = service.build_docker_service()
result = self.client.create_service(name=name, **service_data)
self.client.report_warnings(result, ['Warning'])
def remove_service(self, name):
self.client.remove_service(name)
def get_image_digest(self, name, resolve=False):
if (
not name
or not resolve
):
return name
repo, tag = parse_repository_tag(name)
if not tag:
tag = 'latest'
name = repo + ':' + tag
distribution_data = self.client.inspect_distribution(name)
digest = distribution_data['Descriptor']['digest']
return '%s@%s' % (name, digest)
def get_networks_names_ids(self):
return dict(
(network['Name'], network['Id']) for network in self.client.networks()
)
def get_missing_secret_ids(self):
"""
Resolve missing secret ids by looking them up by name
"""
secret_names = [
secret['secret_name']
for secret in self.client.module.params.get('secrets') or []
if secret['secret_id'] is None
]
if not secret_names:
return {}
secrets = self.client.secrets(filters={'name': secret_names})
secrets = dict(
(secret['Spec']['Name'], secret['ID'])
for secret in secrets
if secret['Spec']['Name'] in secret_names
)
for secret_name in secret_names:
if secret_name not in secrets:
self.client.fail(
'Could not find a secret named "%s"' % secret_name
)
return secrets
def get_missing_config_ids(self):
"""
Resolve missing config ids by looking them up by name
"""
config_names = [
config['config_name']
for config in self.client.module.params.get('configs') or []
if config['config_id'] is None
]
if not config_names:
return {}
configs = self.client.configs(filters={'name': config_names})
configs = dict(
(config['Spec']['Name'], config['ID'])
for config in configs
if config['Spec']['Name'] in config_names
)
for config_name in config_names:
if config_name not in configs:
self.client.fail(
'Could not find a config named "%s"' % config_name
)
return configs
def run(self):
self.diff_tracker = DifferenceTracker()
module = self.client.module
image = module.params['image']
try:
image_digest = self.get_image_digest(
name=image,
resolve=module.params['resolve_image']
)
except DockerException as e:
self.client.fail(
'Error looking for an image named %s: %s'
% (image, e)
)
try:
current_service = self.get_service(module.params['name'])
except Exception as e:
self.client.fail(
'Error looking for service named %s: %s'
% (module.params['name'], e)
)
try:
secret_ids = self.get_missing_secret_ids()
config_ids = self.get_missing_config_ids()
network_ids = self.get_networks_names_ids()
new_service = DockerService.from_ansible_params(
module.params,
current_service,
image_digest,
secret_ids,
config_ids,
network_ids,
self.client.docker_api_version,
self.client.docker_py_version
)
except Exception as e:
return self.client.fail(
'Error parsing module parameters: %s' % e
)
changed = False
msg = 'noop'
rebuilt = False
differences = DifferenceTracker()
facts = {}
if current_service:
if module.params['state'] == 'absent':
if not module.check_mode:
self.remove_service(module.params['name'])
msg = 'Service removed'
changed = True
else:
changed, differences, need_rebuild, force_update = new_service.compare(
current_service
)
if changed:
self.diff_tracker.merge(differences)
if need_rebuild:
if not module.check_mode:
self.remove_service(module.params['name'])
self.create_service(
module.params['name'],
new_service
)
msg = 'Service rebuilt'
rebuilt = True
else:
if not module.check_mode:
self.update_service(
module.params['name'],
current_service,
new_service
)
msg = 'Service updated'
rebuilt = False
else:
if force_update:
if not module.check_mode:
self.update_service(
module.params['name'],
current_service,
new_service
)
msg = 'Service forcefully updated'
rebuilt = False
changed = True
else:
msg = 'Service unchanged'
facts = new_service.get_facts()
else:
if module.params['state'] == 'absent':
msg = 'Service absent'
else:
if not module.check_mode:
self.create_service(module.params['name'], new_service)
msg = 'Service created'
changed = True
facts = new_service.get_facts()
return msg, changed, rebuilt, differences.get_legacy_docker_diffs(), facts
def run_safe(self):
while True:
try:
return self.run()
except APIError as e:
# Sometimes Version.Index will have changed between an inspect and
# update. If this is encountered we'll retry the update.
if self.retries > 0 and 'update out of sequence' in str(e.explanation):
self.retries -= 1
time.sleep(1)
else:
raise
def _detect_publish_mode_usage(client):
for publish_def in client.module.params['publish'] or []:
if publish_def.get('mode'):
return True
return False
def _detect_healthcheck_start_period(client):
if client.module.params['healthcheck']:
return client.module.params['healthcheck']['start_period'] is not None
return False
def _detect_mount_tmpfs_usage(client):
for mount in client.module.params['mounts'] or []:
if mount.get('type') == 'tmpfs':
return True
if mount.get('tmpfs_size') is not None:
return True
if mount.get('tmpfs_mode') is not None:
return True
return False
def _detect_update_config_failure_action_rollback(client):
rollback_config_failure_action = (
(client.module.params['update_config'] or {}).get('failure_action')
)
update_failure_action = client.module.params['update_failure_action']
failure_action = rollback_config_failure_action or update_failure_action
return failure_action == 'rollback'
def main():
argument_spec = dict(
name=dict(type='str', required=True),
image=dict(type='str'),
state=dict(type='str', default='present', choices=['present', 'absent']),
mounts=dict(type='list', elements='dict', options=dict(
source=dict(type='str'),
target=dict(type='str', required=True),
type=dict(
type='str',
default='bind',
choices=['bind', 'volume', 'tmpfs', 'npipe'],
),
readonly=dict(type='bool'),
labels=dict(type='dict'),
propagation=dict(
type='str',
choices=[
'shared',
'slave',
'private',
'rshared',
'rslave',
'rprivate'
]
),
no_copy=dict(type='bool'),
driver_config=dict(type='dict', options=dict(
name=dict(type='str'),
options=dict(type='dict')
)),
tmpfs_size=dict(type='str'),
tmpfs_mode=dict(type='int')
)),
configs=dict(type='list', elements='dict', options=dict(
config_id=dict(type='str'),
config_name=dict(type='str', required=True),
filename=dict(type='str'),
uid=dict(type='str'),
gid=dict(type='str'),
mode=dict(type='int'),
)),
secrets=dict(type='list', elements='dict', options=dict(
secret_id=dict(type='str'),
secret_name=dict(type='str', required=True),
filename=dict(type='str'),
uid=dict(type='str'),
gid=dict(type='str'),
mode=dict(type='int'),
)),
networks=dict(type='list', elements='raw'),
command=dict(type='raw'),
args=dict(type='list', elements='str'),
env=dict(type='raw'),
env_files=dict(type='list', elements='path'),
force_update=dict(type='bool', default=False),
groups=dict(type='list', elements='str'),
logging=dict(type='dict', options=dict(
driver=dict(type='str'),
options=dict(type='dict'),
)),
log_driver=dict(type='str', removed_in_version='2.12'),
log_driver_options=dict(type='dict', removed_in_version='2.12'),
publish=dict(type='list', elements='dict', options=dict(
published_port=dict(type='int', required=True),
target_port=dict(type='int', required=True),
protocol=dict(type='str', default='tcp', choices=['tcp', 'udp']),
mode=dict(type='str', choices=['ingress', 'host']),
)),
placement=dict(type='dict', options=dict(
constraints=dict(type='list', elements='str'),
preferences=dict(type='list', elements='dict'),
)),
constraints=dict(type='list', elements='str', removed_in_version='2.12'),
tty=dict(type='bool'),
dns=dict(type='list', elements='str'),
dns_search=dict(type='list', elements='str'),
dns_options=dict(type='list', elements='str'),
healthcheck=dict(type='dict', options=dict(
test=dict(type='raw'),
interval=dict(type='str'),
timeout=dict(type='str'),
start_period=dict(type='str'),
retries=dict(type='int'),
)),
hostname=dict(type='str'),
hosts=dict(type='dict'),
labels=dict(type='dict'),
container_labels=dict(type='dict'),
mode=dict(
type='str',
default='replicated',
choices=['replicated', 'global']
),
replicas=dict(type='int', default=-1),
endpoint_mode=dict(type='str', choices=['vip', 'dnsrr']),
stop_grace_period=dict(type='str'),
stop_signal=dict(type='str'),
limits=dict(type='dict', options=dict(
cpus=dict(type='float'),
memory=dict(type='str'),
)),
limit_cpu=dict(type='float', removed_in_version='2.12'),
limit_memory=dict(type='str', removed_in_version='2.12'),
read_only=dict(type='bool'),
reservations=dict(type='dict', options=dict(
cpus=dict(type='float'),
memory=dict(type='str'),
)),
reserve_cpu=dict(type='float', removed_in_version='2.12'),
reserve_memory=dict(type='str', removed_in_version='2.12'),
resolve_image=dict(type='bool', default=False),
restart_config=dict(type='dict', options=dict(
condition=dict(type='str', choices=['none', 'on-failure', 'any']),
delay=dict(type='str'),
max_attempts=dict(type='int'),
window=dict(type='str'),
)),
restart_policy=dict(
type='str',
choices=['none', 'on-failure', 'any'],
removed_in_version='2.12'
),
restart_policy_delay=dict(type='raw', removed_in_version='2.12'),
restart_policy_attempts=dict(type='int', removed_in_version='2.12'),
restart_policy_window=dict(type='raw', removed_in_version='2.12'),
rollback_config=dict(type='dict', options=dict(
parallelism=dict(type='int'),
delay=dict(type='str'),
failure_action=dict(
type='str',
choices=['continue', 'pause']
),
monitor=dict(type='str'),
max_failure_ratio=dict(type='float'),
order=dict(type='str'),
)),
update_config=dict(type='dict', options=dict(
parallelism=dict(type='int'),
delay=dict(type='str'),
failure_action=dict(
type='str',
choices=['continue', 'pause', 'rollback']
),
monitor=dict(type='str'),
max_failure_ratio=dict(type='float'),
order=dict(type='str'),
)),
update_delay=dict(type='raw', removed_in_version='2.12'),
update_parallelism=dict(type='int', removed_in_version='2.12'),
update_failure_action=dict(
type='str',
choices=['continue', 'pause', 'rollback'],
removed_in_version='2.12'
),
update_monitor=dict(type='raw', removed_in_version='2.12'),
update_max_failure_ratio=dict(type='float', removed_in_version='2.12'),
update_order=dict(
type='str',
choices=['stop-first', 'start-first'],
removed_in_version='2.12'
),
user=dict(type='str'),
working_dir=dict(type='str'),
)
option_minimal_versions = dict(
constraints=dict(docker_py_version='2.4.0'),
dns=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
dns_options=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
dns_search=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
endpoint_mode=dict(docker_py_version='3.0.0', docker_api_version='1.25'),
force_update=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
healthcheck=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
hostname=dict(docker_py_version='2.2.0', docker_api_version='1.25'),
hosts=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
groups=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
tty=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
secrets=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
configs=dict(docker_py_version='2.6.0', docker_api_version='1.30'),
update_max_failure_ratio=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
update_monitor=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
update_order=dict(docker_py_version='2.7.0', docker_api_version='1.29'),
stop_signal=dict(docker_py_version='2.6.0', docker_api_version='1.28'),
publish=dict(docker_py_version='3.0.0', docker_api_version='1.25'),
read_only=dict(docker_py_version='2.6.0', docker_api_version='1.28'),
resolve_image=dict(docker_api_version='1.30', docker_py_version='3.2.0'),
rollback_config=dict(docker_py_version='3.5.0', docker_api_version='1.28'),
# specials
publish_mode=dict(
docker_py_version='3.0.0',
docker_api_version='1.25',
detect_usage=_detect_publish_mode_usage,
usage_msg='set publish.mode'
),
healthcheck_start_period=dict(
docker_py_version='2.6.0',
docker_api_version='1.29',
detect_usage=_detect_healthcheck_start_period,
usage_msg='set healthcheck.start_period'
),
update_config_max_failure_ratio=dict(
docker_py_version='2.1.0',
docker_api_version='1.25',
detect_usage=lambda c: (c.module.params['update_config'] or {}).get(
'max_failure_ratio'
) is not None,
usage_msg='set update_config.max_failure_ratio'
),
update_config_failure_action=dict(
docker_py_version='3.5.0',
docker_api_version='1.28',
detect_usage=_detect_update_config_failure_action_rollback,
usage_msg='set update_config.failure_action.rollback'
),
update_config_monitor=dict(
docker_py_version='2.1.0',
docker_api_version='1.25',
detect_usage=lambda c: (c.module.params['update_config'] or {}).get(
'monitor'
) is not None,
usage_msg='set update_config.monitor'
),
update_config_order=dict(
docker_py_version='2.7.0',
docker_api_version='1.29',
detect_usage=lambda c: (c.module.params['update_config'] or {}).get(
'order'
) is not None,
usage_msg='set update_config.order'
),
placement_config_preferences=dict(
docker_py_version='2.4.0',
docker_api_version='1.27',
detect_usage=lambda c: (c.module.params['placement'] or {}).get(
'preferences'
) is not None,
usage_msg='set placement.preferences'
),
placement_config_constraints=dict(
docker_py_version='2.4.0',
detect_usage=lambda c: (c.module.params['placement'] or {}).get(
'constraints'
) is not None,
usage_msg='set placement.constraints'
),
mounts_tmpfs=dict(
docker_py_version='2.6.0',
detect_usage=_detect_mount_tmpfs_usage,
usage_msg='set mounts.tmpfs'
),
rollback_config_order=dict(
docker_api_version='1.29',
detect_usage=lambda c: (c.module.params['rollback_config'] or {}).get(
'order'
) is not None,
usage_msg='set rollback_config.order'
),
)
required_if = [
('state', 'present', ['image'])
]
client = AnsibleDockerClient(
argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True,
min_docker_version='2.0.2',
min_docker_api_version='1.24',
option_minimal_versions=option_minimal_versions,
)
try:
dsm = DockerServiceManager(client)
msg, changed, rebuilt, changes, facts = dsm.run_safe()
results = dict(
msg=msg,
changed=changed,
rebuilt=rebuilt,
changes=changes,
swarm_service=facts,
)
if client.module._diff:
before, after = dsm.diff_tracker.get_before_after()
results['diff'] = dict(before=before, after=after)
client.module.exit_json(**results)
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,225 |
docker_container does not allow host port ranges bound to a single container port
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using the docker_container module, attempting to publish a port using a range of ports from the host does not work as expected. Instead, Ansible attempts to bind only the first port number in the range.
Example:
Trying to get a similar docker command as the following running in Ansible.
```
docker run -p 80-85:80 -d linuxserver/nginx
```
As of now, this will cause an error stating `Bind for 0.0.0.0:80 failed: port is already allocated"`
This is a supported command argument in Docker.
Ref: https://docs.docker.com/network/links/
> Instead, you may specify a range of host ports to bind a container port to that is different than the default ephemeral port range:
>
> `$ docker run -d -p 8000-9000:5000 training/webapp python app.py`
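
With this syntax Docker binds the container port to the first free host port in the given range. A minimal sketch of how such a publish spec could be parsed (the helper name `parse_published_port` is hypothetical, for illustration only, and is not part of the docker_container module):

```python
def parse_published_port(spec):
    """Parse a Docker publish spec like '80-85:80' into
    ((host_port_start, host_port_end), container_port).

    Hypothetical helper for illustration; a single host port is
    normalized to a one-element range.
    """
    host_part, container_part = spec.split(':', 1)
    if '-' in host_part:
        # Host side is a range, e.g. '80-85'
        start, end = host_part.split('-', 1)
        host_ports = (int(start), int(end))
    else:
        # Single host port, e.g. '8080'
        host_ports = (int(host_part), int(host_part))
    return host_ports, int(container_part)


print(parse_published_port('80-85:80'))  # ((80, 85), 80)
```

A module that understood this syntax could then pass the whole range through to the Docker daemon instead of binding only the first port.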
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
(venv) ➜ cloudbox git:(develop) ✗ ansible --version
ansible 2.10.0.dev0
config file = /srv/git/cloudbox/ansible.cfg
configured module search path = [u'/home/seed/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
executable location = /opt/ansible/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
```
cloudbox git:(develop) ✗ ansible --version
ansible 2.9.2
config file = /srv/git/cloudbox/ansible.cfg
configured module search path = [u'/home/seed/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/srv/git/cloudbox/ansible.cfg) = [u'profile_tasks']
DEFAULT_FACT_PATH(/srv/git/cloudbox/ansible.cfg) = /srv/git/cloudbox/ansible_facts.d
DEFAULT_HASH_BEHAVIOUR(/srv/git/cloudbox/ansible.cfg) = merge
DEFAULT_HOST_LIST(/srv/git/cloudbox/ansible.cfg) = [u'/srv/git/cloudbox/inventories/local']
DEFAULT_LOG_PATH(/srv/git/cloudbox/ansible.cfg) = /srv/git/cloudbox/cloudbox.log
DEFAULT_ROLES_PATH(/srv/git/cloudbox/ansible.cfg) = [u'/srv/git/cloudbox/roles', u'/srv/git/cloudbox/resources/roles']
DEFAULT_VAULT_PASSWORD_FILE(/srv/git/cloudbox/ansible.cfg) = /etc/ansible/.ansible_vault
RETRY_FILES_ENABLED(/srv/git/cloudbox/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04 LTS. Bare-metal setup
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
tasks:
- name: Create and start container
docker_container:
name: nginx
image: linuxserver/nginx
pull: yes
published_ports:
- "80-85:80"
networks:
- name: cloudbox
aliases:
- nginx2
purge_networks: yes
restart_policy: unless-stopped
state: started
- name: Create and start container
docker_container:
name: nginx2
image: linuxserver/nginx
pull: yes
published_ports:
- "80-85:80"
networks:
- name: cloudbox
aliases:
- nginx3
purge_networks: yes
restart_policy: unless-stopped
state: started
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Containers nginx2 and nginx3 to be created and bound to whatever host port was available between 80 and 85.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
https://pastebin.com/MxMrGaHm
Both containers are being bound to port 80 on the host.
|
https://github.com/ansible/ansible/issues/66225
|
https://github.com/ansible/ansible/pull/66382
|
21ae66db2ecea3fef21b9b73b5e890809d58631e
|
23b2bb4f4dc68ffa385e74b5d5c304f461887965
| 2020-01-06T21:26:54Z |
python
| 2020-02-03T22:27:40Z |
changelogs/fragments/66382-docker_container-port-range.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,225 |
docker_container does not allow host port ranges bound to a single container port
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using the docker_container module, publishing a container port to a range of host ports does not work as expected. Instead, Ansible attempts to use only the first port number in the range.
Example:
Trying to get a similar docker command as the following running in Ansible.
```
docker run -p 80-85:80 -d linuxserver/nginx
```
As of now, this will cause an error stating `Bind for 0.0.0.0:80 failed: port is already allocated`
This is a supported command argument in Docker.
Ref: https://docs.docker.com/network/links/
> Instead, you may specify a range of host ports to bind a container port to that is different than the default ephemeral port range:
>
> `$ docker run -d -p 8000-9000:5000 training/webapp python app.py`
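The host-range syntax quoted above can be sketched in plain Python. This is a hypothetical parser for illustration only, not the module's actual code:

```python
def parse_port_spec(spec):
    """Split a 'host:container' publish spec, allowing a host port range
    such as '80-85:80' (hypothetical helper, for illustration only)."""
    host, _, container = spec.rpartition(":")
    if "-" in host:
        start, end = host.split("-")
        host_ports = list(range(int(start), int(end) + 1))
    else:
        host_ports = [int(host)]
    return host_ports, int(container)

# '80-85:80' expands to six candidate host ports for container port 80
print(parse_port_spec("80-85:80"))
```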
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
(venv) ➜ cloudbox git:(develop) ✗ ansible --version
ansible 2.10.0.dev0
config file = /srv/git/cloudbox/ansible.cfg
configured module search path = [u'/home/seed/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
executable location = /opt/ansible/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
```
cloudbox git:(develop) ✗ ansible --version
ansible 2.9.2
config file = /srv/git/cloudbox/ansible.cfg
configured module search path = [u'/home/seed/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/srv/git/cloudbox/ansible.cfg) = [u'profile_tasks']
DEFAULT_FACT_PATH(/srv/git/cloudbox/ansible.cfg) = /srv/git/cloudbox/ansible_facts.d
DEFAULT_HASH_BEHAVIOUR(/srv/git/cloudbox/ansible.cfg) = merge
DEFAULT_HOST_LIST(/srv/git/cloudbox/ansible.cfg) = [u'/srv/git/cloudbox/inventories/local']
DEFAULT_LOG_PATH(/srv/git/cloudbox/ansible.cfg) = /srv/git/cloudbox/cloudbox.log
DEFAULT_ROLES_PATH(/srv/git/cloudbox/ansible.cfg) = [u'/srv/git/cloudbox/roles', u'/srv/git/cloudbox/resources/roles']
DEFAULT_VAULT_PASSWORD_FILE(/srv/git/cloudbox/ansible.cfg) = /etc/ansible/.ansible_vault
RETRY_FILES_ENABLED(/srv/git/cloudbox/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04 LTS. Bare-metal setup
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
tasks:
- name: Create and start container
docker_container:
name: nginx
image: linuxserver/nginx
pull: yes
published_ports:
- "80-85:80"
networks:
- name: cloudbox
aliases:
- nginx2
purge_networks: yes
restart_policy: unless-stopped
state: started
- name: Create and start container
docker_container:
name: nginx2
image: linuxserver/nginx
pull: yes
published_ports:
- "80-85:80"
networks:
- name: cloudbox
aliases:
- nginx3
purge_networks: yes
restart_policy: unless-stopped
state: started
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Containers nginx2 and nginx3 to be created and bound to whatever host port was available between 80 and 85.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
https://pastebin.com/MxMrGaHm
Both containers are being bound to port 80 on the host.
|
https://github.com/ansible/ansible/issues/66225
|
https://github.com/ansible/ansible/pull/66382
|
21ae66db2ecea3fef21b9b73b5e890809d58631e
|
23b2bb4f4dc68ffa385e74b5d5c304f461887965
| 2020-01-06T21:26:54Z |
python
| 2020-02-03T22:27:40Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
* Windows Server 2008 and 2008 R2 will no longer be supported or tested in the next Ansible release, see :ref:`windows_faq_server2008`.
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead.
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.
* :ref:`redfish_config <redfish_config_module>`: the ``bios_attribute_name`` and ``bios_attribute_value`` options will be removed. To maintain the existing behavior use the ``bios_attributes`` option instead.
* :ref:`clc_aa_policy <clc_aa_policy_module>`: the ``wait`` parameter will be removed. It has always been ignored by the module.
* :ref:`redfish_config <redfish_config_module>`, :ref:`redfish_command <redfish_command_module>`: the behavior to select the first System, Manager, or Chassis resource to modify when multiple are present will be removed. Use the new ``resource_id`` option to specify target resource to modify.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`ec2 <ec2_module>`: the ``group`` and ``group_id`` options will become mutually exclusive. Currently ``group_id`` is ignored if you pass both.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.
* :ref:`elb_network_lb <elb_network_lb_module>`: the default behaviour for the ``state`` option will change from ``absent`` to ``present``. To maintain the existing behavior explicitly set state to ``absent``.
* :ref:`vmware_tag_info <vmware_tag_info_module>`: the module will not return ``tag_facts`` since it does not return multiple tags with the same name and different category id. To maintain the existing behavior use ``tag_info`` which is a list of tag metadata.
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead.
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`zabbix_proxy <zabbix_proxy_module>` deprecates ``interface`` sub-options ``type`` and ``main`` when proxy type is set to passive via ``status=passive``. Make sure these suboptions are removed from your playbook as they were never supported by Zabbix in the first place.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs all ``*.ps1`` files in the specified directory, since that could execute potentially unknown scripts. It now follows Pester's own default behaviour of only running tests for files matching ``*.tests.ps1``.
* :ref:`win_find <win_find_module>` has been refactored to better match the behaviour of the ``find`` module. Here is what has changed:
* When the directory specified by ``paths`` does not exist or is a file, it will no longer fail and will just warn the user
* Junction points are no longer reported as ``islnk``, use ``isjunction`` to properly report these files. This behaviour matches the :ref:`win_stat <win_stat_module>`
* Directories no longer return a ``size``, this matches the ``stat`` and ``find`` behaviour and has been removed due to the difficulties in correctly reporting the size of a directory
* :ref:`docker_container <docker_container_module>` no longer passes information on non-anonymous volumes or binds as ``Volumes`` to the Docker daemon. This increases compatibility with the ``docker`` CLI program. Note that if you specify ``volumes: strict`` in ``comparisons``, this could cause existing containers created with docker_container from Ansible 2.9 or earlier to restart.
* :ref:`purefb_fs <purefb_fs_module>` no longer supports the deprecated ``nfs`` option. This has been superseded by ``nfsv3``.
Plugins
=======
Lookup plugin names case-sensitivity
------------------------------------
* Prior to Ansible ``2.10``, lookup plugin names passed as an argument to the ``lookup()`` function were treated as case-insensitive, unlike lookups invoked via ``with_<lookup_name>``. Ansible ``2.10`` makes both ``lookup()`` and ``with_`` case-sensitive for consistency.
Noteworthy plugin changes
-------------------------
* The ``hashi_vault`` lookup plugin now returns the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret which required additional steps to extract and filter the desired version.
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,225 |
docker_container does not allow host port ranges bound to a single container port
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using the docker_container module, publishing a container port to a range of host ports does not work as expected. Instead, Ansible attempts to use only the first port number in the range.
Example:
Trying to get a similar docker command as the following running in Ansible.
```
docker run -p 80-85:80 -d linuxserver/nginx
```
As of now, this will cause an error stating `Bind for 0.0.0.0:80 failed: port is already allocated`
This is a supported command argument in Docker.
Ref: https://docs.docker.com/network/links/
> Instead, you may specify a range of host ports to bind a container port to that is different than the default ephemeral port range:
>
> `$ docker run -d -p 8000-9000:5000 training/webapp python app.py`
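The expected range behavior can be illustrated with a small sketch: each container should receive the first free host port in the range. This is a hypothetical allocator, not how the Docker daemon is implemented:

```python
def pick_host_port(candidates, taken):
    """Return the first candidate host port not already taken
    (illustrative only; real allocation happens in the Docker daemon)."""
    for port in candidates:
        if port not in taken:
            return port
    raise RuntimeError("no free host port in range")

taken = set()
bindings = {}
for name in ("nginx", "nginx2"):
    port = pick_host_port(range(80, 86), taken)
    taken.add(port)
    bindings[name] = port
print(bindings)  # expected: {'nginx': 80, 'nginx2': 81}
```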
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
(venv) ➜ cloudbox git:(develop) ✗ ansible --version
ansible 2.10.0.dev0
config file = /srv/git/cloudbox/ansible.cfg
configured module search path = [u'/home/seed/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
executable location = /opt/ansible/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
```
cloudbox git:(develop) ✗ ansible --version
ansible 2.9.2
config file = /srv/git/cloudbox/ansible.cfg
configured module search path = [u'/home/seed/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/srv/git/cloudbox/ansible.cfg) = [u'profile_tasks']
DEFAULT_FACT_PATH(/srv/git/cloudbox/ansible.cfg) = /srv/git/cloudbox/ansible_facts.d
DEFAULT_HASH_BEHAVIOUR(/srv/git/cloudbox/ansible.cfg) = merge
DEFAULT_HOST_LIST(/srv/git/cloudbox/ansible.cfg) = [u'/srv/git/cloudbox/inventories/local']
DEFAULT_LOG_PATH(/srv/git/cloudbox/ansible.cfg) = /srv/git/cloudbox/cloudbox.log
DEFAULT_ROLES_PATH(/srv/git/cloudbox/ansible.cfg) = [u'/srv/git/cloudbox/roles', u'/srv/git/cloudbox/resources/roles']
DEFAULT_VAULT_PASSWORD_FILE(/srv/git/cloudbox/ansible.cfg) = /etc/ansible/.ansible_vault
RETRY_FILES_ENABLED(/srv/git/cloudbox/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04 LTS. Bare-metal setup
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
tasks:
- name: Create and start container
docker_container:
name: nginx
image: linuxserver/nginx
pull: yes
published_ports:
- "80-85:80"
networks:
- name: cloudbox
aliases:
- nginx2
purge_networks: yes
restart_policy: unless-stopped
state: started
- name: Create and start container
docker_container:
name: nginx2
image: linuxserver/nginx
pull: yes
published_ports:
- "80-85:80"
networks:
- name: cloudbox
aliases:
- nginx3
purge_networks: yes
restart_policy: unless-stopped
state: started
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Containers nginx2 and nginx3 to be created and bound to whatever host port was available between 80 and 85.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
https://pastebin.com/MxMrGaHm
Both containers are being bound to port 80 on the host.
|
https://github.com/ansible/ansible/issues/66225
|
https://github.com/ansible/ansible/pull/66382
|
21ae66db2ecea3fef21b9b73b5e890809d58631e
|
23b2bb4f4dc68ffa385e74b5d5c304f461887965
| 2020-01-06T21:26:54Z |
python
| 2020-02-03T22:27:40Z |
lib/ansible/modules/cloud/docker/docker_container.py
|
#!/usr/bin/python
#
# Copyright 2016 Red Hat | Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_container
short_description: manage docker containers
description:
- Manage the life cycle of docker containers.
- Supports check mode. Run with C(--check) and C(--diff) to view config difference and list of actions to be taken.
version_added: "2.1"
notes:
- For most config changes, the container needs to be recreated, i.e. the existing container has to be destroyed and
a new one created. This can cause unexpected data loss and downtime. You can use the I(comparisons) option to
prevent this.
- If the module needs to recreate the container, it will only use the options provided to the module to create the
new container (except I(image)). Therefore, always specify *all* options relevant to the container.
- When I(restart) is set to C(true), the module will only restart the container if no config changes are detected.
Please note that several options have default values; if the container to be restarted uses different values for
these options, it will be recreated instead. The options with default values which can cause this are I(auto_remove),
I(detach), I(init), I(interactive), I(memory), I(paused), I(privileged), I(read_only) and I(tty). This behavior
can be changed by setting I(container_default_behavior) to C(no_defaults), which will be the default value from
Ansible 2.14 on.
options:
auto_remove:
description:
- Enable auto-removal of the container on daemon side when the container's process exits.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
version_added: "2.4"
blkio_weight:
description:
- Block IO (relative weight), between 10 and 1000.
type: int
capabilities:
description:
- List of capabilities to add to the container.
type: list
elements: str
cap_drop:
description:
- List of capabilities to drop from the container.
type: list
elements: str
version_added: "2.7"
cleanup:
description:
- Use with I(detach=false) to remove the container after successful execution.
type: bool
default: no
version_added: "2.2"
command:
description:
- Command to execute when the container starts. A command may be either a string or a list.
- Prior to version 2.4, strings were split on commas.
type: raw
comparisons:
description:
- Allows specifying how properties of existing containers are compared with
module options to decide whether the container should be recreated / updated
or not.
- Only options which correspond to the state of a container as handled by the
Docker daemon can be specified, as well as C(networks).
- Must be a dictionary specifying for an option one of the keys C(strict), C(ignore)
and C(allow_more_present).
- If C(strict) is specified, values are tested for equality, and changes always
result in updating or restarting. If C(ignore) is specified, changes are ignored.
- C(allow_more_present) is allowed only for lists, sets and dicts. If it is
specified for lists or sets, the container will only be updated or restarted if
the module option contains a value which is not present in the container's
options. If the option is specified for a dict, the container will only be updated
or restarted if the module option contains a key which isn't present in the
container's option, or if the value of a key present differs.
- The wildcard option C(*) can be used to set one of the default values C(strict)
or C(ignore) to *all* comparisons which are not explicitly set to other values.
- See the examples for details.
type: dict
version_added: "2.8"
container_default_behavior:
description:
- Various module options used to have default values. This causes problems with
containers which use different values for these options.
- The default value is C(compatibility), which will ensure that the default values
are used when the values are not explicitly specified by the user.
- From Ansible 2.14 on, the default value will switch to C(no_defaults). To avoid
deprecation warnings, please set I(container_default_behavior) to an explicit
value.
- This affects the I(auto_remove), I(detach), I(init), I(interactive), I(memory),
I(paused), I(privileged), I(read_only) and I(tty) options.
type: str
choices:
- compatibility
- no_defaults
version_added: "2.10"
cpu_period:
description:
- Limit CPU CFS (Completely Fair Scheduler) period.
- See I(cpus) for an easier to use alternative.
type: int
cpu_quota:
description:
- Limit CPU CFS (Completely Fair Scheduler) quota.
- See I(cpus) for an easier to use alternative.
type: int
cpus:
description:
- Specify how much of the available CPU resources a container can use.
- A value of C(1.5) means that at most one and a half CPU (core) will be used.
type: float
version_added: '2.10'
cpuset_cpus:
description:
- CPUs in which to allow execution C(1,3) or C(1-3).
type: str
cpuset_mems:
description:
- Memory nodes (MEMs) in which to allow execution C(0-3) or C(0,1).
type: str
cpu_shares:
description:
- CPU shares (relative weight).
type: int
detach:
description:
- Enable detached mode to leave the container running in background.
- If disabled, the task will reflect the status of the container run (failed if the command failed).
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(yes).
type: bool
devices:
description:
- List of host device bindings to add to the container.
- "Each binding is a mapping expressed in the format C(<path_on_host>:<path_in_container>:<cgroup_permissions>)."
type: list
elements: str
device_read_bps:
description:
- "List of device path and read rate (bytes per second) from device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit in format C(<number>[<unit>])."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_write_bps:
description:
- "List of device and write rate (bytes per second) to device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device write limit in format C(<number>[<unit>])."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_read_iops:
description:
- "List of device and read rate (IO per second) from device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
device_write_iops:
description:
- "List of device and write rate (IO per second) to device."
type: list
elements: dict
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device write limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
dns_opts:
description:
- List of DNS options.
type: list
elements: str
dns_servers:
description:
- List of custom DNS servers.
type: list
elements: str
dns_search_domains:
description:
- List of custom DNS search domains.
type: list
elements: str
domainname:
description:
- Container domainname.
type: str
version_added: "2.5"
env:
description:
- Dictionary of key,value pairs.
- Values which might be parsed as numbers, booleans or other types by the YAML parser must be quoted (e.g. C("true")) in order to avoid data loss.
type: dict
env_file:
description:
- Path to a file, present on the target, containing environment variables I(FOO=BAR).
- If a variable is also present in I(env), then the I(env) value will override.
type: path
version_added: "2.2"
entrypoint:
description:
- Command that overwrites the default C(ENTRYPOINT) of the image.
type: list
elements: str
etc_hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary.
Each host name will be added to the container's C(/etc/hosts) file.
type: dict
exposed_ports:
description:
- List of additional container ports which informs Docker that the container
listens on the specified network ports at runtime.
- If the port is already exposed using C(EXPOSE) in a Dockerfile, it does not
need to be exposed again.
type: list
elements: str
aliases:
- exposed
- expose
force_kill:
description:
- Use the kill command when stopping a running container.
type: bool
default: no
aliases:
- forcekill
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
type: list
elements: str
healthcheck:
description:
- Configure a check that is run to determine whether or not containers for this service are "healthy".
- "See the docs for the L(HEALTHCHECK Dockerfile instruction,https://docs.docker.com/engine/reference/builder/#healthcheck)
for details on how healthchecks work."
- "I(interval), I(timeout) and I(start_period) are specified as durations. They accept duration as a string in a format
that look like: C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
type: dict
suboptions:
test:
description:
- Command to run to check health.
- Must be either a string or a list. If it is a list, the first item must be one of C(NONE), C(CMD) or C(CMD-SHELL).
type: raw
interval:
description:
- Time between running the check.
- The default used by the Docker daemon is C(30s).
type: str
timeout:
description:
- Maximum time to allow one check to run.
- The default used by the Docker daemon is C(30s).
type: str
retries:
description:
- Consecutive number of failures needed to report unhealthy.
- The default used by the Docker daemon is C(3).
type: int
start_period:
description:
- Start period for the container to initialize before starting health-retries countdown.
- The default used by the Docker daemon is C(0s).
type: str
version_added: "2.8"
hostname:
description:
- The container's hostname.
type: str
ignore_image:
description:
- When I(state) is C(present) or C(started), the module compares the configuration of an existing
container to requested configuration. The evaluation includes the image version. If the image
version in the registry does not match the container, the container will be recreated. You can
stop this behavior by setting I(ignore_image) to C(True).
- "*Warning:* This option is ignored if C(image: ignore) or C(*: ignore) is specified in the
I(comparisons) option."
type: bool
default: no
version_added: "2.2"
image:
description:
- Repository path and tag used to create the container. If an image is not found or pull is true, the image
will be pulled from the registry. If no tag is included, C(latest) will be used.
- Can also be an image ID. If this is the case, the image is assumed to be available locally.
The I(pull) option is ignored for this case.
type: str
init:
description:
- Run an init inside the container that forwards signals and reaps processes.
- This option requires Docker API >= 1.25.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
version_added: "2.6"
interactive:
description:
- Keep stdin open after a container is launched, even if not attached.
- If I(container_default_behavior) is set to C(compatibility) (the default value), this
option has a default of C(no).
type: bool
ipc_mode:
description:
- Set the IPC mode for the container.
- Can be one of C(container:<name|id>) to reuse another container's IPC namespace or C(host) to use
the host's IPC namespace within the container.
type: str
keep_volumes:
description:
- Retain volumes associated with a removed container.
type: bool
default: yes
kill_signal:
description:
- Override default signal used to kill a running container.
type: str
kernel_memory:
description:
- "Kernel memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte). Minimum is C(4M)."
- Omitting the unit defaults to bytes.
type: str
labels:
description:
- Dictionary of key value pairs.
type: dict
links:
description:
- List of name aliases for linked containers in the format C(container_name:alias).
- Setting this will force container to be restarted.
type: list
elements: str
log_driver:
description:
- Specify the logging driver. Docker uses C(json-file) by default.
- See L(here,https://docs.docker.com/config/containers/logging/configure/) for possible choices.
type: str
log_options:
description:
- Dictionary of options specific to the chosen I(log_driver).
- See U(https://docs.docker.com/engine/admin/logging/overview/) for details.
type: dict
aliases:
- log_opt
mac_address:
description:
- Container MAC address (e.g. 92:d0:c6:0a:29:33).
type: str
memory:
description:
- "Memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
      - If I(container_default_behavior) is set to C(compatibility) (the default value), this
        option has a default of C("0").
type: str
memory_reservation:
description:
- "Memory soft limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swap:
description:
- "Total memory limit (memory + swap) in format C(<number>[<unit>]).
Number is a positive integer. Unit can be C(B) (byte), C(K) (kibibyte, 1024B),
C(M) (mebibyte), C(G) (gibibyte), C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swappiness:
description:
- Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
      - If not set, the value will remain the same if the container exists and will be inherited
        from the host machine if it is (re-)created.
type: int
mounts:
version_added: "2.9"
type: list
elements: dict
description:
- Specification for mounts to be added to the container. More powerful alternative to I(volumes).
suboptions:
target:
description:
- Path inside the container.
type: str
required: true
source:
description:
- Mount source (e.g. a volume name or a host path).
type: str
type:
description:
- The mount type.
- Note that C(npipe) is only supported by Docker for Windows.
type: str
choices:
- bind
- npipe
- tmpfs
- volume
default: volume
read_only:
description:
- Whether the mount should be read-only.
type: bool
consistency:
description:
- The consistency requirement for the mount.
type: str
choices:
- cached
- consistent
- default
- delegated
propagation:
description:
- Propagation mode. Only valid for the C(bind) type.
type: str
choices:
- private
- rprivate
- shared
- rshared
- slave
- rslave
no_copy:
description:
- False if the volume should be populated with the data from the target. Only valid for the C(volume) type.
- The default value is C(false).
type: bool
labels:
description:
- User-defined name and labels for the volume. Only valid for the C(volume) type.
type: dict
volume_driver:
description:
- Specify the volume driver. Only valid for the C(volume) type.
- See L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver) for details.
type: str
volume_options:
description:
- Dictionary of options specific to the chosen volume_driver. See
L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver) for details.
type: dict
tmpfs_size:
description:
- "The size for the tmpfs mount in bytes in format <number>[<unit>]."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- "Omitting the unit defaults to bytes."
type: str
tmpfs_mode:
description:
- The permission mode for the tmpfs mount.
type: str
name:
description:
- Assign a name to a new container or match an existing container.
      - When identifying an existing container, I(name) may be a name or a long or short container ID.
type: str
required: yes
network_mode:
description:
- Connect the container to a network. Choices are C(bridge), C(host), C(none), C(container:<name|id>), C(<network_name>) or C(default).
- "*Note* that from Ansible 2.14 on, if I(networks_cli_compatible) is C(true) and I(networks) contains at least one network,
the default value for I(network_mode) will be the name of the first network in the I(networks) list. You can prevent this
by explicitly specifying a value for I(network_mode), like the default value C(default) which will be used by Docker if
I(network_mode) is not specified."
type: str
userns_mode:
description:
      - Set the user namespace mode for the container. Currently, the only valid values are C(host) and the empty string.
type: str
version_added: "2.5"
networks:
description:
- List of networks the container belongs to.
- For examples of the data structure and usage see EXAMPLES below.
- To remove a container from one or more networks, use the I(purge_networks) option.
- Note that as opposed to C(docker run ...), M(docker_container) does not remove the default
network if I(networks) is specified. You need to explicitly use I(purge_networks) to enforce
the removal of the default network (and all other networks not explicitly mentioned in I(networks)).
Alternatively, use the I(networks_cli_compatible) option, which will be enabled by default from Ansible 2.12 on.
type: list
elements: dict
suboptions:
name:
description:
- The network's name.
type: str
required: yes
ipv4_address:
description:
- The container's IPv4 address in this network.
type: str
ipv6_address:
description:
- The container's IPv6 address in this network.
type: str
links:
description:
- A list of containers to link to.
type: list
elements: str
aliases:
description:
- List of aliases for this container in this network. These names
can be used in the network to reach this container.
type: list
elements: str
version_added: "2.2"
networks_cli_compatible:
description:
- "When networks are provided to the module via the I(networks) option, the module
behaves differently than C(docker run --network): C(docker run --network other)
will create a container with network C(other) attached, but the default network
not attached. This module with I(networks: {name: other}) will create a container
with both C(default) and C(other) attached. If I(purge_networks) is set to C(yes),
the C(default) network will be removed afterwards."
- "If I(networks_cli_compatible) is set to C(yes), this module will behave as
C(docker run --network) and will *not* add the default network if I(networks) is
specified. If I(networks) is not specified, the default network will be attached."
- "*Note* that docker CLI also sets I(network_mode) to the name of the first network
added if C(--network) is specified. For more compatibility with docker CLI, you
explicitly have to set I(network_mode) to the name of the first network you're
adding. This behavior will change for Ansible 2.14: then I(network_mode) will
automatically be set to the first network name in I(networks) if I(network_mode)
is not specified, I(networks) has at least one entry and I(networks_cli_compatible)
is C(true)."
      - The current default is C(no). A new default of C(yes) will be set in Ansible 2.12.
type: bool
version_added: "2.8"
oom_killer:
description:
- Whether or not to disable OOM Killer for the container.
type: bool
oom_score_adj:
description:
- An integer value containing the score given to the container in order to tune
OOM killer preferences.
type: int
version_added: "2.2"
output_logs:
description:
- If set to true, output of the container command will be printed.
- Only effective when I(log_driver) is set to C(json-file) or C(journald).
type: bool
default: no
version_added: "2.7"
paused:
description:
- Use with the started state to pause running processes inside the container.
      - If I(container_default_behavior) is set to C(compatibility) (the default value), this
        option has a default of C(no).
type: bool
pid_mode:
description:
- Set the PID namespace mode for the container.
- Note that Docker SDK for Python < 2.0 only supports C(host). Newer versions of the
Docker SDK for Python (docker) allow all values supported by the Docker daemon.
type: str
pids_limit:
description:
- Set PIDs limit for the container. It accepts an integer value.
- Set C(-1) for unlimited PIDs.
type: int
version_added: "2.8"
privileged:
description:
- Give extended privileges to the container.
      - If I(container_default_behavior) is set to C(compatibility) (the default value), this
        option has a default of C(no).
type: bool
published_ports:
description:
- List of ports to publish from the container to the host.
- "Use docker CLI syntax: C(8000), C(9000:8000), or C(0.0.0.0:9000:8000), where 8000 is a
container port, 9000 is a host port, and 0.0.0.0 is a host interface."
- Port ranges can be used for source and destination ports. If two ranges with
different lengths are specified, the shorter range will be used.
- "Bind addresses must be either IPv4 or IPv6 addresses. Hostnames are *not* allowed. This
is different from the C(docker) command line utility. Use the L(dig lookup,../lookup/dig.html)
to resolve hostnames."
- A value of C(all) will publish all exposed container ports to random host ports, ignoring
any other mappings.
- If I(networks) parameter is provided, will inspect each network to see if there exists
a bridge network with optional parameter C(com.docker.network.bridge.host_binding_ipv4).
If such a network is found, then published ports where no host IP address is specified
will be bound to the host IP pointed to by C(com.docker.network.bridge.host_binding_ipv4).
Note that the first bridge network with a C(com.docker.network.bridge.host_binding_ipv4)
value encountered in the list of I(networks) is the one that will be used.
type: list
elements: str
aliases:
- ports
pull:
description:
- If true, always pull the latest version of an image. Otherwise, will only pull an image
when missing.
- "*Note:* images are only pulled when specified by name. If the image is specified
as a image ID (hash), it cannot be pulled."
type: bool
default: no
purge_networks:
description:
- Remove the container from ALL networks not included in I(networks) parameter.
- Any default networks such as C(bridge), if not found in I(networks), will be removed as well.
type: bool
default: no
version_added: "2.2"
read_only:
description:
- Mount the container's root file system as read-only.
      - If I(container_default_behavior) is set to C(compatibility) (the default value), this
        option has a default of C(no).
type: bool
recreate:
description:
- Use with present and started states to force the re-creation of an existing container.
type: bool
default: no
removal_wait_timeout:
description:
      - When removing an existing container, the docker daemon API call returns after the container
        is scheduled for removal. Removal usually is very fast, but it can happen that during high I/O
        load, removal can take longer. By default, the module will wait until the container has been
        removed before trying to (re-)create it, however long this takes.
- By setting this option, the module will wait at most this many seconds for the container to be
removed. If the container is still in the removal phase after this many seconds, the module will
fail.
type: float
version_added: "2.10"
restart:
description:
- Use with started state to force a matching container to be stopped and restarted.
type: bool
default: no
restart_policy:
description:
- Container restart policy.
      - Place quotes around the C(no) option.
type: str
choices:
- 'no'
- 'on-failure'
- 'always'
- 'unless-stopped'
restart_retries:
description:
- Use with restart policy to control maximum number of restart attempts.
type: int
runtime:
description:
- Runtime to use for the container.
type: str
version_added: "2.8"
shm_size:
description:
- "Size of C(/dev/shm) in format C(<number>[<unit>]). Number is positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes. If you omit the size entirely, Docker daemon uses C(64M).
type: str
security_opts:
description:
- List of security options in the form of C("label:user:User").
type: list
elements: str
state:
description:
- 'C(absent) - A container matching the specified name will be stopped and removed. Use I(force_kill) to kill the container
rather than stopping it. Use I(keep_volumes) to retain volumes associated with the removed container.'
- 'C(present) - Asserts the existence of a container matching the name and any provided configuration parameters. If no
container matches the name, a container will be created. If a container matches the name but the provided configuration
does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed and re-created
with the requested config.'
- 'C(started) - Asserts that the container is first C(present), and then if the container is not running moves it to a running
state. Use I(restart) to force a matching container to be stopped and restarted.'
- 'C(stopped) - Asserts that the container is first C(present), and then if the container is running moves it to a stopped
state.'
- To control what will be taken into account when comparing configuration, see the I(comparisons) option. To avoid that the
image version will be taken into account, you can also use the I(ignore_image) option.
- Use the I(recreate) option to always force re-creation of a matching container, even if it is running.
- If the container should be killed instead of stopped in case it needs to be stopped for recreation, or because I(state) is
C(stopped), please use the I(force_kill) option. Use I(keep_volumes) to retain volumes associated with a removed container.
type: str
default: started
choices:
- absent
- present
- stopped
- started
stop_signal:
description:
- Override default signal used to stop the container.
type: str
stop_timeout:
description:
- Number of seconds to wait for the container to stop before sending C(SIGKILL).
When the container is created by this module, its C(StopTimeout) configuration
will be set to this value.
- When the container is stopped, will be used as a timeout for stopping the
container. In case the container has a custom C(StopTimeout) configuration,
the behavior depends on the version of the docker daemon. New versions of
the docker daemon will always use the container's configured C(StopTimeout)
value if it has been configured.
type: int
trust_image_content:
description:
- If C(yes), skip image verification.
- The option has never been used by the module. It will be removed in Ansible 2.14.
type: bool
default: no
tmpfs:
description:
- Mount a tmpfs directory.
type: list
elements: str
version_added: 2.4
tty:
description:
- Allocate a pseudo-TTY.
      - If I(container_default_behavior) is set to C(compatibility) (the default value), this
        option has a default of C(no).
type: bool
ulimits:
description:
- "List of ulimit options. A ulimit is specified as C(nofile:262144:262144)."
type: list
elements: str
sysctls:
description:
- Dictionary of key,value pairs.
type: dict
version_added: 2.4
user:
description:
- Sets the username or UID used and optionally the groupname or GID for the specified command.
- "Can be of the forms C(user), C(user:group), C(uid), C(uid:gid), C(user:gid) or C(uid:group)."
type: str
uts:
description:
- Set the UTS namespace mode for the container.
type: str
volumes:
description:
- List of volumes to mount within the container.
- "Use docker CLI-style syntax: C(/host:/container[:mode])"
- "Mount modes can be a comma-separated list of various modes such as C(ro), C(rw), C(consistent),
C(delegated), C(cached), C(rprivate), C(private), C(rshared), C(shared), C(rslave), C(slave), and
C(nocopy). Note that the docker daemon might not support all modes and combinations of such modes."
- SELinux hosts can additionally use C(z) or C(Z) to use a shared or private label for the volume.
- "Note that Ansible 2.7 and earlier only supported one mode, which had to be one of C(ro), C(rw),
C(z), and C(Z)."
type: list
elements: str
volume_driver:
description:
- The container volume driver.
type: str
volumes_from:
description:
- List of container names or IDs to get volumes from.
type: list
elements: str
working_dir:
description:
- Path to the working directory.
type: str
version_added: "2.4"
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
author:
- "Cove Schneider (@cove)"
- "Joshua Conner (@joshuaconner)"
- "Pavel Antonov (@softzilla)"
- "Thomas Steinbach (@ThomasSteinbach)"
- "Philippe Jandot (@zfil)"
- "Daan Oosterveld (@dusdanig)"
- "Chris Houseknecht (@chouseknecht)"
- "Kassian Sun (@kassiansun)"
- "Felix Fontein (@felixfontein)"
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "Docker API >= 1.20"
'''
EXAMPLES = '''
- name: Create a data container
docker_container:
name: mydata
image: busybox
volumes:
- /data
- name: Re-create a redis container
docker_container:
name: myredis
image: redis
command: redis-server --appendonly yes
state: present
recreate: yes
exposed_ports:
- 6379
volumes_from:
- mydata
- name: Restart a container
docker_container:
name: myapplication
image: someuser/appimage
state: started
restart: yes
links:
- "myredis:aliasedredis"
devices:
- "/dev/sda:/dev/xvda:rwm"
ports:
- "8080:9000"
- "127.0.0.1:8081:9001/udp"
env:
SECRET_KEY: "ssssh"
# Values which might be parsed as numbers, booleans or other types by the YAML parser need to be quoted
BOOLEAN_KEY: "yes"
- name: Container present
docker_container:
name: mycontainer
state: present
image: ubuntu:14.04
command: sleep infinity
- name: Stop a container
docker_container:
name: mycontainer
state: stopped
- name: Start 4 load-balanced containers
docker_container:
name: "container{{ item }}"
recreate: yes
image: someuser/anotherappimage
command: sleep 1d
with_sequence: count=4
- name: Remove container
docker_container:
name: ohno
state: absent
- name: Syslogging output
docker_container:
name: myservice
image: busybox
log_driver: syslog
log_options:
syslog-address: tcp://my-syslog-server:514
syslog-facility: daemon
      # NOTE: in Docker 1.13+ the "syslog-tag" option was renamed to "tag".
      # For older docker installs, use "syslog-tag" instead.
tag: myservice
- name: Create db container and connect to network
docker_container:
name: db_test
image: "postgres:latest"
networks:
- name: "{{ docker_network_name }}"
- name: Start container, connect to network and link
docker_container:
name: sleeper
image: ubuntu:14.04
networks:
- name: TestingNet
ipv4_address: "172.1.1.100"
aliases:
- sleepyzz
links:
- db_test:db
- name: TestingNet2
- name: Start a container with a command
docker_container:
name: sleepy
image: ubuntu:14.04
command: ["sleep", "infinity"]
- name: Add container to networks
docker_container:
name: sleepy
networks:
- name: TestingNet
ipv4_address: 172.1.1.18
links:
- sleeper
- name: TestingNet2
ipv4_address: 172.1.10.20
- name: Update network with aliases
docker_container:
name: sleepy
networks:
- name: TestingNet
aliases:
- sleepyz
- zzzz
- name: Remove container from one network
docker_container:
name: sleepy
networks:
- name: TestingNet2
purge_networks: yes
- name: Remove container from all networks
docker_container:
name: sleepy
purge_networks: yes
- name: Start a container and use an env file
docker_container:
name: agent
image: jenkinsci/ssh-slave
env_file: /var/tmp/jenkins/agent.env
- name: Create a container with limited capabilities
docker_container:
name: sleepy
image: ubuntu:16.04
command: sleep infinity
capabilities:
- sys_time
cap_drop:
- all
- name: Finer container restart/update control
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
volumes:
- /tmp:/tmp
comparisons:
image: ignore # don't restart containers with older versions of the image
env: strict # we want precisely this environment
volumes: allow_more_present # if there are more volumes, that's ok, as long as `/tmp:/tmp` is there
- name: Finer container restart/update control II
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
comparisons:
'*': ignore # by default, ignore *all* options (including image)
env: strict # except for environment variables; there, we want to be strict
- name: Start container with healthstatus
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# Check if nginx server is healthy by curl'ing the server.
      # If this fails or times out, the healthcheck fails.
test: ["CMD", "curl", "--fail", "http://nginx.host.com"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 30s
- name: Remove healthcheck from container
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# The "NONE" check needs to be specified
test: ["NONE"]
- name: Start container with block device read limit
docker_container:
name: test
image: ubuntu:18.04
state: started
device_read_bps:
# Limit read rate for /dev/sda to 20 mebibytes per second
- path: /dev/sda
rate: 20M
device_read_iops:
# Limit read rate for /dev/sdb to 300 IO per second
- path: /dev/sdb
rate: 300
'''
RETURN = '''
container:
description:
- Facts representing the current state of the container. Matches the docker inspection output.
- Note that facts are part of the registered vars since Ansible 2.8. For compatibility reasons, the facts
are also accessible directly as C(docker_container). Note that the returned fact will be removed in Ansible 2.12.
- Before 2.3 this was C(ansible_docker_container) but was renamed in 2.3 to C(docker_container) due to
conflicts with the connection plugin.
- Empty if I(state) is C(absent)
- If I(detached) is C(false), will include C(Output) attribute containing any output from container run.
returned: always
type: dict
sample: '{
"AppArmorProfile": "",
"Args": [],
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"/usr/bin/supervisord"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
"Hostname": "8e47bf643eb9",
"Image": "lnmp_nginx:v1",
"Labels": {},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": {
"/tmp/lnmp/nginx-sites/logs/": {}
},
...
}'
'''
import os
import re
import shlex
import traceback
from distutils.version import LooseVersion
from time import sleep
from ansible.module_utils.common.text.formatters import human_to_bytes
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DifferenceTracker,
DockerBaseClass,
compare_generic,
is_image_name_id,
sanitize_result,
clean_dict_booleans_for_docker_api,
omit_none_from_dict,
parse_healthcheck,
DOCKER_COMMON_ARGS,
RequestException,
)
from ansible.module_utils.six import string_types
try:
from docker import utils
from ansible.module_utils.docker.common import docker_version
if LooseVersion(docker_version) >= LooseVersion('1.10.0'):
from docker.types import Ulimit, LogConfig
from docker import types as docker_types
else:
from docker.utils.types import Ulimit, LogConfig
from docker.errors import DockerException, APIError, NotFound
except Exception:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
REQUIRES_CONVERSION_TO_BYTES = [
'kernel_memory',
'memory',
'memory_reservation',
'memory_swap',
'shm_size'
]
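The options listed in REQUIRES_CONVERSION_TO_BYTES all accept the C(<number>[<unit>]) format described in the documentation above. A minimal standalone sketch of that conversion (not Ansible's actual `human_to_bytes()` implementation from `ansible.module_utils.common.text.formatters`) looks like this:

```python
# Minimal sketch of the "<number>[<unit>]" size format accepted by the
# options above. Units are binary (1K = 1024B); omitting the unit
# defaults to bytes.
UNITS = {'B': 1, 'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3,
         'T': 1024 ** 4, 'P': 1024 ** 5}


def to_bytes(size):
    if size and size[-1].upper() in UNITS:
        return int(size[:-1]) * UNITS[size[-1].upper()]
    # Omitting the unit defaults to bytes.
    return int(size)


print(to_bytes('4M'))  # 4194304
```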
def is_volume_permissions(mode):
for part in mode.split(','):
if part not in ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated', 'cached', 'rprivate', 'private', 'rshared', 'shared', 'rslave', 'slave', 'nocopy'):
return False
return True
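The mode validation above can be exercised standalone; a mode string is valid only if every comma-separated part is one of the recognized mount modes:

```python
# Standalone copy of the volume-mode check above.
ALLOWED = ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated', 'cached',
           'rprivate', 'private', 'rshared', 'shared', 'rslave', 'slave', 'nocopy')


def is_volume_permissions(mode):
    # Valid only if every comma-separated part is a recognized mode.
    return all(part in ALLOWED for part in mode.split(','))


print(is_volume_permissions('ro,z'))      # True
print(is_volume_permissions('ro,bogus'))  # False
```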
def parse_port_range(range_or_port, client):
'''
Parses a string containing either a single port or a range of ports.
Returns a list of integers for each port in the list.
'''
if '-' in range_or_port:
try:
start, end = [int(port) for port in range_or_port.split('-')]
except Exception:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
if end < start:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
return list(range(start, end + 1))
else:
try:
return [int(range_or_port)]
except Exception:
client.fail('Invalid port: "{0}"'.format(range_or_port))
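A self-contained sketch of the same parsing, with `ValueError` standing in for `client.fail()`: a range like C(8000-8002) expands to every port in the range, a single port yields a one-element list.

```python
# Same port parsing as above, raising ValueError instead of client.fail().
def parse_port_range(range_or_port):
    if '-' in range_or_port:
        try:
            start, end = [int(port) for port in range_or_port.split('-')]
        except Exception:
            raise ValueError('Invalid port range: "{0}"'.format(range_or_port))
        if end < start:
            raise ValueError('Invalid port range: "{0}"'.format(range_or_port))
        return list(range(start, end + 1))
    try:
        return [int(range_or_port)]
    except Exception:
        raise ValueError('Invalid port: "{0}"'.format(range_or_port))


print(parse_port_range('8000-8002'))  # [8000, 8001, 8002]
print(parse_port_range('9000'))       # [9000]
```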
def split_colon_ipv6(text, client):
'''
Split string by ':', while keeping IPv6 addresses in square brackets in one component.
'''
if '[' not in text:
return text.split(':')
start = 0
result = []
while start < len(text):
i = text.find('[', start)
if i < 0:
result.extend(text[start:].split(':'))
break
j = text.find(']', i)
if j < 0:
client.fail('Cannot find closing "]" in input "{0}" for opening "[" at index {1}!'.format(text, i + 1))
result.extend(text[start:i].split(':'))
k = text.find(':', j)
if k < 0:
result[-1] += text[i:]
start = len(text)
else:
result[-1] += text[i:k]
if k == len(text):
result.append('')
break
start = k + 1
return result
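The effect on a published-ports string with an IPv6 bind address can be seen with a standalone copy of the function (raising `ValueError` instead of calling `client.fail()`):

```python
# Standalone copy of split_colon_ipv6(): ':' is the separator, but an
# IPv6 address in square brackets is kept together as one component.
def split_colon_ipv6(text):
    if '[' not in text:
        return text.split(':')
    start = 0
    result = []
    while start < len(text):
        i = text.find('[', start)
        if i < 0:
            result.extend(text[start:].split(':'))
            break
        j = text.find(']', i)
        if j < 0:
            raise ValueError('Cannot find closing "]" in input "{0}"'.format(text))
        result.extend(text[start:i].split(':'))
        k = text.find(':', j)
        if k < 0:
            result[-1] += text[i:]
            start = len(text)
        else:
            result[-1] += text[i:k]
            start = k + 1
    return result


print(split_colon_ipv6('[2001:db8::1]:9000:8000'))
# ['[2001:db8::1]', '9000', '8000']
```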
class TaskParameters(DockerBaseClass):
'''
Access and parse module parameters
'''
def __init__(self, client):
super(TaskParameters, self).__init__()
self.client = client
self.auto_remove = None
self.blkio_weight = None
self.capabilities = None
self.cap_drop = None
self.cleanup = None
self.command = None
self.cpu_period = None
self.cpu_quota = None
self.cpus = None
self.cpuset_cpus = None
self.cpuset_mems = None
self.cpu_shares = None
self.detach = None
self.debug = None
self.devices = None
self.device_read_bps = None
self.device_write_bps = None
self.device_read_iops = None
self.device_write_iops = None
self.dns_servers = None
self.dns_opts = None
self.dns_search_domains = None
self.domainname = None
self.env = None
self.env_file = None
self.entrypoint = None
self.etc_hosts = None
self.exposed_ports = None
self.force_kill = None
self.groups = None
self.healthcheck = None
self.hostname = None
self.ignore_image = None
self.image = None
self.init = None
self.interactive = None
self.ipc_mode = None
self.keep_volumes = None
self.kernel_memory = None
self.kill_signal = None
self.labels = None
self.links = None
self.log_driver = None
self.output_logs = None
self.log_options = None
self.mac_address = None
self.memory = None
self.memory_reservation = None
self.memory_swap = None
self.memory_swappiness = None
self.mounts = None
self.name = None
self.network_mode = None
self.userns_mode = None
self.networks = None
self.networks_cli_compatible = None
self.oom_killer = None
self.oom_score_adj = None
self.paused = None
self.pid_mode = None
self.pids_limit = None
self.privileged = None
self.purge_networks = None
self.pull = None
self.read_only = None
self.recreate = None
self.removal_wait_timeout = None
self.restart = None
self.restart_retries = None
self.restart_policy = None
self.runtime = None
self.shm_size = None
self.security_opts = None
self.state = None
self.stop_signal = None
self.stop_timeout = None
self.tmpfs = None
self.trust_image_content = None
self.tty = None
self.user = None
self.uts = None
self.volumes = None
self.volume_binds = dict()
self.volumes_from = None
self.volume_driver = None
self.working_dir = None
for key, value in client.module.params.items():
setattr(self, key, value)
self.comparisons = client.comparisons
# If state is 'absent', parameters do not have to be parsed or interpreted.
# Only the container's name is needed.
if self.state == 'absent':
return
if self.cpus is not None:
self.cpus = int(round(self.cpus * 1E9))
if self.groups:
# In case integers are passed as groups, we need to convert them to
# strings as docker internally treats them as strings.
self.groups = [str(g) for g in self.groups]
for param_name in REQUIRES_CONVERSION_TO_BYTES:
if client.module.params.get(param_name):
try:
setattr(self, param_name, human_to_bytes(client.module.params.get(param_name)))
except ValueError as exc:
self.fail("Failed to convert %s to bytes: %s" % (param_name, exc))
self.publish_all_ports = False
self.published_ports = self._parse_publish_ports()
if self.published_ports in ('all', 'ALL'):
self.publish_all_ports = True
self.published_ports = None
self.ports = self._parse_exposed_ports(self.published_ports)
self.log("expose ports:")
self.log(self.ports, pretty_print=True)
self.links = self._parse_links(self.links)
if self.volumes:
self.volumes = self._expand_host_paths()
self.tmpfs = self._parse_tmpfs()
self.env = self._get_environment()
self.ulimits = self._parse_ulimits()
self.sysctls = self._parse_sysctls()
self.log_config = self._parse_log_config()
try:
self.healthcheck, self.disable_healthcheck = parse_healthcheck(self.healthcheck)
except ValueError as e:
self.fail(str(e))
self.exp_links = None
self.volume_binds = self._get_volume_binds(self.volumes)
self.pid_mode = self._replace_container_names(self.pid_mode)
self.ipc_mode = self._replace_container_names(self.ipc_mode)
self.network_mode = self._replace_container_names(self.network_mode)
self.log("volumes:")
self.log(self.volumes, pretty_print=True)
self.log("volume binds:")
self.log(self.volume_binds, pretty_print=True)
if self.networks:
for network in self.networks:
network['id'] = self._get_network_id(network['name'])
if not network['id']:
self.fail("Parameter error: network named %s could not be found. Does it exist?" % network['name'])
if network.get('links'):
network['links'] = self._parse_links(network['links'])
if self.mac_address:
# Ensure the MAC address uses colons instead of hyphens for later comparison
self.mac_address = self.mac_address.replace('-', ':')
if self.entrypoint:
# convert from list to str.
self.entrypoint = ' '.join([str(x) for x in self.entrypoint])
if self.command:
# convert from list to str
if isinstance(self.command, list):
self.command = ' '.join([str(x) for x in self.command])
self.mounts_opt, self.expected_mounts = self._process_mounts()
self._check_mount_target_collisions()
for param_name in ["device_read_bps", "device_write_bps"]:
if client.module.params.get(param_name):
self._process_rate_bps(option=param_name)
for param_name in ["device_read_iops", "device_write_iops"]:
if client.module.params.get(param_name):
self._process_rate_iops(option=param_name)
def fail(self, msg):
self.client.fail(msg)
@property
def update_parameters(self):
'''
Returns parameters used to update a container
'''
update_parameters = dict(
blkio_weight='blkio_weight',
cpu_period='cpu_period',
cpu_quota='cpu_quota',
cpu_shares='cpu_shares',
cpuset_cpus='cpuset_cpus',
cpuset_mems='cpuset_mems',
mem_limit='memory',
mem_reservation='memory_reservation',
memswap_limit='memory_swap',
kernel_memory='kernel_memory',
restart_policy='restart_policy',
)
result = dict()
for key, value in update_parameters.items():
if getattr(self, value, None) is not None:
if key == 'restart_policy' and self.client.option_minimal_versions[value]['supported']:
restart_policy = dict(Name=self.restart_policy,
MaximumRetryCount=self.restart_retries)
result[key] = restart_policy
elif self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
return result
@property
def create_parameters(self):
'''
Returns parameters used to create a container
'''
create_params = dict(
command='command',
domainname='domainname',
hostname='hostname',
user='user',
detach='detach',
stdin_open='interactive',
tty='tty',
ports='ports',
environment='env',
name='name',
entrypoint='entrypoint',
mac_address='mac_address',
labels='labels',
stop_signal='stop_signal',
working_dir='working_dir',
stop_timeout='stop_timeout',
healthcheck='healthcheck',
)
if self.client.docker_py_version < LooseVersion('3.0'):
# cpu_shares and volume_driver moved to create_host_config in > 3
create_params['cpu_shares'] = 'cpu_shares'
create_params['volume_driver'] = 'volume_driver'
result = dict(
host_config=self._host_config(),
volumes=self._get_mounts(),
)
for key, value in create_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
if self.disable_healthcheck:
# Make sure image's health check is overridden
result['healthcheck'] = {'test': ['NONE']}
if self.networks_cli_compatible and self.networks:
network = self.networks[0]
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if network.get(para):
params[para] = network[para]
network_config = dict()
network_config[network['name']] = self.client.create_endpoint_config(**params)
result['networking_config'] = self.client.create_networking_config(network_config)
return result
def _expand_host_paths(self):
new_vols = []
for vol in self.volumes:
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
host, container, mode = parts
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if re.match(r'[.~]', host):
host = os.path.abspath(os.path.expanduser(host))
new_vols.append("%s:%s:%s" % (host, container, mode))
continue
elif len(parts) == 2:
if not is_volume_permissions(parts[1]) and re.match(r'[.~]', parts[0]):
host = os.path.abspath(os.path.expanduser(parts[0]))
new_vols.append("%s:%s:rw" % (host, parts[1]))
continue
new_vols.append(vol)
return new_vols
def _get_mounts(self):
'''
Return a list of container mounts.
:return:
'''
result = []
if self.volumes:
for vol in self.volumes:
# Only pass anonymous volumes to create container
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
continue
if len(parts) == 2:
if not is_volume_permissions(parts[1]):
continue
result.append(vol)
self.log("mounts:")
self.log(result, pretty_print=True)
return result
def _host_config(self):
'''
Returns parameters used to create a HostConfig object
'''
host_config_params = dict(
port_bindings='published_ports',
publish_all_ports='publish_all_ports',
links='links',
privileged='privileged',
dns='dns_servers',
dns_opt='dns_opts',
dns_search='dns_search_domains',
binds='volume_binds',
volumes_from='volumes_from',
network_mode='network_mode',
userns_mode='userns_mode',
cap_add='capabilities',
cap_drop='cap_drop',
extra_hosts='etc_hosts',
read_only='read_only',
ipc_mode='ipc_mode',
security_opt='security_opts',
ulimits='ulimits',
sysctls='sysctls',
log_config='log_config',
mem_limit='memory',
memswap_limit='memory_swap',
mem_swappiness='memory_swappiness',
oom_score_adj='oom_score_adj',
oom_kill_disable='oom_killer',
shm_size='shm_size',
group_add='groups',
devices='devices',
pid_mode='pid_mode',
tmpfs='tmpfs',
init='init',
uts_mode='uts',
runtime='runtime',
auto_remove='auto_remove',
device_read_bps='device_read_bps',
device_write_bps='device_write_bps',
device_read_iops='device_read_iops',
device_write_iops='device_write_iops',
pids_limit='pids_limit',
mounts='mounts',
nano_cpus='cpus',
)
if self.client.docker_py_version >= LooseVersion('1.9') and self.client.docker_api_version >= LooseVersion('1.22'):
# blkio_weight can always be updated, but can only be set on creation
# when Docker SDK for Python and Docker API are new enough
host_config_params['blkio_weight'] = 'blkio_weight'
if self.client.docker_py_version >= LooseVersion('3.0'):
# cpu_shares and volume_driver moved to create_host_config in Docker SDK for Python 3.x
host_config_params['cpu_shares'] = 'cpu_shares'
host_config_params['volume_driver'] = 'volume_driver'
params = dict()
for key, value in host_config_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
params[key] = getattr(self, value)
if self.restart_policy:
params['restart_policy'] = dict(Name=self.restart_policy,
MaximumRetryCount=self.restart_retries)
if 'mounts' in params:
params['mounts'] = self.mounts_opt
return self.client.create_host_config(**params)
@property
def default_host_ip(self):
ip = '0.0.0.0'
if not self.networks:
return ip
for net in self.networks:
if net.get('name'):
try:
network = self.client.inspect_network(net['name'])
if network.get('Driver') == 'bridge' and \
network.get('Options', {}).get('com.docker.network.bridge.host_binding_ipv4'):
ip = network['Options']['com.docker.network.bridge.host_binding_ipv4']
break
except NotFound as nfe:
self.client.fail(
"Cannot inspect the network '{0}' to determine the default IP: {1}".format(net['name'], nfe),
exception=traceback.format_exc()
)
return ip
def _parse_publish_ports(self):
'''
Parse ports from docker CLI syntax
'''
if self.published_ports is None:
return None
if 'all' in self.published_ports:
return 'all'
default_ip = self.default_host_ip
binds = {}
for port in self.published_ports:
parts = split_colon_ipv6(str(port), self.client)
container_port = parts[-1]
protocol = ''
if '/' in container_port:
container_port, protocol = parts[-1].split('/')
container_ports = parse_port_range(container_port, self.client)
p_len = len(parts)
if p_len == 1:
port_binds = len(container_ports) * [(default_ip,)]
elif p_len == 2:
port_binds = [(default_ip, port) for port in parse_port_range(parts[0], self.client)]
elif p_len == 3:
# We only allow IPv4 and IPv6 addresses for the bind address
ipaddr = parts[0]
if not re.match(r'^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$', parts[0]) and not re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
self.fail(('Bind addresses for published ports must be IPv4 or IPv6 addresses, not hostnames. '
'Use the dig lookup to resolve hostnames. (Found hostname: {0})').format(ipaddr))
if re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
ipaddr = ipaddr[1:-1]
if parts[1]:
port_binds = [(ipaddr, port) for port in parse_port_range(parts[1], self.client)]
else:
port_binds = len(container_ports) * [(ipaddr,)]
for bind, container_port in zip(port_binds, container_ports):
idx = '{0}/{1}'.format(container_port, protocol) if protocol else container_port
if idx in binds:
old_bind = binds[idx]
if isinstance(old_bind, list):
old_bind.append(bind)
else:
binds[idx] = [old_bind, bind]
else:
binds[idx] = bind
return binds
def _get_volume_binds(self, volumes):
'''
Extract host bindings, if any, from list of volume mapping strings.
:return: dictionary of bind mappings
'''
result = dict()
if volumes:
for vol in volumes:
host = None
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
host, container, mode = parts
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
elif len(parts) == 2:
if not is_volume_permissions(parts[1]):
host, container, mode = (parts + ['rw'])
if host is not None:
result[host] = dict(
bind=container,
mode=mode
)
return result
def _parse_exposed_ports(self, published_ports):
'''
Parse exposed ports from docker CLI-style ports syntax.
'''
exposed = []
if self.exposed_ports:
for port in self.exposed_ports:
port = str(port).strip()
protocol = 'tcp'
match = re.search(r'(/.+$)', port)
if match:
protocol = match.group(1).replace('/', '')
port = re.sub(r'/.+$', '', port)
exposed.append((port, protocol))
if published_ports:
# Any published port should also be exposed
for publish_port in published_ports:
match = False
if isinstance(publish_port, string_types) and '/' in publish_port:
port, protocol = publish_port.split('/')
port = int(port)
else:
protocol = 'tcp'
port = int(publish_port)
for exposed_port in exposed:
if exposed_port[1] != protocol:
continue
if isinstance(exposed_port[0], string_types) and '-' in exposed_port[0]:
start_port, end_port = exposed_port[0].split('-')
if int(start_port) <= port <= int(end_port):
match = True
elif exposed_port[0] == port:
match = True
if not match:
exposed.append((port, protocol))
return exposed
@staticmethod
def _parse_links(links):
'''
Turn links into a list of (link, alias) tuples
'''
if links is None:
return None
result = []
for link in links:
parsed_link = link.split(':', 1)
if len(parsed_link) == 2:
result.append((parsed_link[0], parsed_link[1]))
else:
result.append((parsed_link[0], parsed_link[0]))
return result
def _parse_ulimits(self):
'''
Turn ulimits into an array of Ulimit objects
'''
if self.ulimits is None:
return None
results = []
for limit in self.ulimits:
limits = dict()
pieces = limit.split(':')
if len(pieces) >= 2:
limits['name'] = pieces[0]
limits['soft'] = int(pieces[1])
limits['hard'] = int(pieces[1])
if len(pieces) == 3:
limits['hard'] = int(pieces[2])
try:
results.append(Ulimit(**limits))
except ValueError as exc:
self.fail("Error parsing ulimits value %s - %s" % (limit, exc))
return results
def _parse_sysctls(self):
'''
Return sysctls as a dict (no conversion needed)
'''
return self.sysctls
def _parse_log_config(self):
'''
Create a LogConfig object
'''
if self.log_driver is None:
return None
options = dict(
Type=self.log_driver,
Config=dict()
)
if self.log_options is not None:
options['Config'] = dict()
for k, v in self.log_options.items():
if not isinstance(v, string_types):
self.client.module.warn(
"Non-string value found for log_options option '%s'. The value is automatically converted to '%s'. "
"If this is not correct, or you want to avoid such warnings, please quote the value." % (k, str(v))
)
v = str(v)
self.log_options[k] = v
options['Config'][k] = v
try:
return LogConfig(**options)
except ValueError as exc:
self.fail('Error parsing logging options - %s' % (exc))
def _parse_tmpfs(self):
'''
Turn the tmpfs list into a dict mapping mount paths to their options
'''
result = dict()
if self.tmpfs is None:
return result
for tmpfs_spec in self.tmpfs:
split_spec = tmpfs_spec.split(":", 1)
if len(split_spec) > 1:
result[split_spec[0]] = split_spec[1]
else:
result[split_spec[0]] = ""
return result
def _get_environment(self):
"""
If environment file is combined with explicit environment variables, the explicit environment variables
take precedence.
"""
final_env = {}
if self.env_file:
parsed_env_file = utils.parse_env_file(self.env_file)
for name, value in parsed_env_file.items():
final_env[name] = str(value)
if self.env:
for name, value in self.env.items():
if not isinstance(value, string_types):
self.fail("Non-string value found for env option. Ambiguous env options must be "
"wrapped in quotes to avoid them being interpreted. Key: %s" % (name, ))
final_env[name] = str(value)
return final_env
def _get_network_id(self, network_name):
network_id = None
try:
for network in self.client.networks(names=[network_name]):
if network['Name'] == network_name:
network_id = network['Id']
break
except Exception as exc:
self.fail("Error getting network id for %s - %s" % (network_name, str(exc)))
return network_id
def _process_mounts(self):
if self.mounts is None:
return None, None
mounts_list = []
mounts_expected = []
for mount in self.mounts:
target = mount['target']
datatype = mount['type']
mount_dict = dict(mount)
# Sanity checks (so we don't wait for docker-py to barf on input)
if mount_dict.get('source') is None and datatype != 'tmpfs':
self.client.fail('source must be specified for mount "{0}" of type "{1}"'.format(target, datatype))
mount_option_types = dict(
volume_driver='volume',
volume_options='volume',
propagation='bind',
no_copy='volume',
labels='volume',
tmpfs_size='tmpfs',
tmpfs_mode='tmpfs',
)
for option, req_datatype in mount_option_types.items():
if mount_dict.get(option) is not None and datatype != req_datatype:
self.client.fail('{0} cannot be specified for mount "{1}" of type "{2}" (needs type "{3}")'.format(option, target, datatype, req_datatype))
# Handle volume_driver and volume_options
volume_driver = mount_dict.pop('volume_driver')
volume_options = mount_dict.pop('volume_options')
if volume_driver:
if volume_options:
volume_options = clean_dict_booleans_for_docker_api(volume_options)
mount_dict['driver_config'] = docker_types.DriverConfig(name=volume_driver, options=volume_options)
if mount_dict['labels']:
mount_dict['labels'] = clean_dict_booleans_for_docker_api(mount_dict['labels'])
if mount_dict.get('tmpfs_size') is not None:
try:
mount_dict['tmpfs_size'] = human_to_bytes(mount_dict['tmpfs_size'])
except ValueError as exc:
self.fail('Failed to convert tmpfs_size of mount "{0}" to bytes: {1}'.format(target, exc))
if mount_dict.get('tmpfs_mode') is not None:
try:
mount_dict['tmpfs_mode'] = int(mount_dict['tmpfs_mode'], 8)
except Exception as dummy:
self.client.fail('tmpfs_mode of mount "{0}" is not an octal string!'.format(target))
# Fill expected mount dict
mount_expected = dict(mount)
mount_expected['tmpfs_size'] = mount_dict['tmpfs_size']
mount_expected['tmpfs_mode'] = mount_dict['tmpfs_mode']
# Add result to lists
mounts_list.append(docker_types.Mount(**mount_dict))
mounts_expected.append(omit_none_from_dict(mount_expected))
return mounts_list, mounts_expected
def _process_rate_bps(self, option):
"""
Format device_read_bps and device_write_bps option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
device_dict['Rate'] = human_to_bytes(device_dict['Rate'])
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _process_rate_iops(self, option):
"""
Format device_read_iops and device_write_iops option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _replace_container_names(self, mode):
"""
Parse IPC and PID modes. If they contain a container name, replace
with the container's ID.
"""
if mode is None or not mode.startswith('container:'):
return mode
container_name = mode[len('container:'):]
# Try to inspect the container to see whether this is an ID or a
# name (and in the latter case, retrieve its ID)
container = self.client.get_container(container_name)
if container is None:
# If we can't find the container, issue a warning and continue with
# what the user specified.
self.client.module.warn('Cannot find a container with name or ID "{0}"'.format(container_name))
return mode
return 'container:{0}'.format(container['Id'])
def _check_mount_target_collisions(self):
last = dict()
def f(t, name):
if t in last:
if name == last[t]:
self.client.fail('The mount point "{0}" appears twice in the {1} option'.format(t, name))
else:
self.client.fail('The mount point "{0}" appears both in the {1} and {2} option'.format(t, name, last[t]))
last[t] = name
if self.expected_mounts:
for t in [m['target'] for m in self.expected_mounts]:
f(t, 'mounts')
if self.volumes:
for v in self.volumes:
vs = v.split(':')
f(vs[0 if len(vs) == 1 else 1], 'volumes')
class Container(DockerBaseClass):
def __init__(self, container, parameters):
super(Container, self).__init__()
self.raw = container
self.Id = None
self.container = container
if container:
self.Id = container['Id']
self.Image = container['Image']
self.log(self.container, pretty_print=True)
self.parameters = parameters
self.parameters.expected_links = None
self.parameters.expected_ports = None
self.parameters.expected_exposed = None
self.parameters.expected_volumes = None
self.parameters.expected_ulimits = None
self.parameters.expected_sysctls = None
self.parameters.expected_etc_hosts = None
self.parameters.expected_env = None
self.parameters_map = dict()
self.parameters_map['expected_links'] = 'links'
self.parameters_map['expected_ports'] = 'expected_ports'
self.parameters_map['expected_exposed'] = 'exposed_ports'
self.parameters_map['expected_volumes'] = 'volumes'
self.parameters_map['expected_ulimits'] = 'ulimits'
self.parameters_map['expected_sysctls'] = 'sysctls'
self.parameters_map['expected_etc_hosts'] = 'etc_hosts'
self.parameters_map['expected_env'] = 'env'
self.parameters_map['expected_entrypoint'] = 'entrypoint'
self.parameters_map['expected_binds'] = 'volumes'
self.parameters_map['expected_cmd'] = 'command'
self.parameters_map['expected_devices'] = 'devices'
self.parameters_map['expected_healthcheck'] = 'healthcheck'
self.parameters_map['expected_mounts'] = 'mounts'
def fail(self, msg):
self.parameters.client.fail(msg)
@property
def exists(self):
return bool(self.container)
@property
def removing(self):
if self.container and self.container.get('State'):
return self.container['State'].get('Status') == 'removing'
return False
@property
def running(self):
if self.container and self.container.get('State'):
if self.container['State'].get('Running') and not self.container['State'].get('Ghost', False):
return True
return False
@property
def paused(self):
if self.container and self.container.get('State'):
return self.container['State'].get('Paused', False)
return False
def _compare(self, a, b, compare):
'''
Compare values a and b as described in compare.
'''
return compare_generic(a, b, compare['comparison'], compare['type'])
def _decode_mounts(self, mounts):
if not mounts:
return mounts
result = []
empty_dict = dict()
for mount in mounts:
res = dict()
res['type'] = mount.get('Type')
res['source'] = mount.get('Source')
res['target'] = mount.get('Target')
res['read_only'] = mount.get('ReadOnly', False) # golang's omitempty for bool returns None for False
res['consistency'] = mount.get('Consistency')
res['propagation'] = mount.get('BindOptions', empty_dict).get('Propagation')
res['no_copy'] = mount.get('VolumeOptions', empty_dict).get('NoCopy', False)
res['labels'] = mount.get('VolumeOptions', empty_dict).get('Labels', empty_dict)
res['volume_driver'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Name')
res['volume_options'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Options', empty_dict)
res['tmpfs_size'] = mount.get('TmpfsOptions', empty_dict).get('SizeBytes')
res['tmpfs_mode'] = mount.get('TmpfsOptions', empty_dict).get('Mode')
result.append(res)
return result
def has_different_configuration(self, image):
'''
Diff parameters vs existing container config. Returns tuple: (True | False, List of differences)
'''
self.log('Starting has_different_configuration')
self.parameters.expected_entrypoint = self._get_expected_entrypoint()
self.parameters.expected_links = self._get_expected_links()
self.parameters.expected_ports = self._get_expected_ports()
self.parameters.expected_exposed = self._get_expected_exposed(image)
self.parameters.expected_volumes = self._get_expected_volumes(image)
self.parameters.expected_binds = self._get_expected_binds(image)
self.parameters.expected_ulimits = self._get_expected_ulimits(self.parameters.ulimits)
self.parameters.expected_sysctls = self._get_expected_sysctls(self.parameters.sysctls)
self.parameters.expected_etc_hosts = self._convert_simple_dict_to_list('etc_hosts')
self.parameters.expected_env = self._get_expected_env(image)
self.parameters.expected_cmd = self._get_expected_cmd()
self.parameters.expected_devices = self._get_expected_devices()
self.parameters.expected_healthcheck = self._get_expected_healthcheck()
if not self.container.get('HostConfig'):
self.fail("has_different_configuration: Error parsing container properties. HostConfig missing.")
if not self.container.get('Config'):
self.fail("has_different_configuration: Error parsing container properties. Config missing.")
if not self.container.get('NetworkSettings'):
self.fail("has_different_configuration: Error parsing container properties. NetworkSettings missing.")
host_config = self.container['HostConfig']
log_config = host_config.get('LogConfig', dict())
config = self.container['Config']
network = self.container['NetworkSettings']
# The previous version of the docker module ignored the detach state by
# assuming if the container was running, it must have been detached.
detach = not (config.get('AttachStderr') and config.get('AttachStdout'))
# "ExposedPorts": null returns None type & causes AttributeError - PR #5517
if config.get('ExposedPorts') is not None:
expected_exposed = [self._normalize_port(p) for p in config.get('ExposedPorts', dict()).keys()]
else:
expected_exposed = []
# Map parameters to container inspect results
config_mapping = dict(
expected_cmd=config.get('Cmd'),
domainname=config.get('Domainname'),
hostname=config.get('Hostname'),
user=config.get('User'),
detach=detach,
init=host_config.get('Init'),
interactive=config.get('OpenStdin'),
capabilities=host_config.get('CapAdd'),
cap_drop=host_config.get('CapDrop'),
expected_devices=host_config.get('Devices'),
dns_servers=host_config.get('Dns'),
dns_opts=host_config.get('DnsOptions'),
dns_search_domains=host_config.get('DnsSearch'),
expected_env=(config.get('Env') or []),
expected_entrypoint=config.get('Entrypoint'),
expected_etc_hosts=host_config['ExtraHosts'],
expected_exposed=expected_exposed,
groups=host_config.get('GroupAdd'),
ipc_mode=host_config.get("IpcMode"),
labels=config.get('Labels'),
expected_links=host_config.get('Links'),
mac_address=network.get('MacAddress'),
memory_swappiness=host_config.get('MemorySwappiness'),
network_mode=host_config.get('NetworkMode'),
userns_mode=host_config.get('UsernsMode'),
oom_killer=host_config.get('OomKillDisable'),
oom_score_adj=host_config.get('OomScoreAdj'),
pid_mode=host_config.get('PidMode'),
privileged=host_config.get('Privileged'),
expected_ports=host_config.get('PortBindings'),
read_only=host_config.get('ReadonlyRootfs'),
runtime=host_config.get('Runtime'),
shm_size=host_config.get('ShmSize'),
security_opts=host_config.get("SecurityOpt"),
stop_signal=config.get("StopSignal"),
tmpfs=host_config.get('Tmpfs'),
tty=config.get('Tty'),
expected_ulimits=host_config.get('Ulimits'),
expected_sysctls=host_config.get('Sysctls'),
uts=host_config.get('UTSMode'),
expected_volumes=config.get('Volumes'),
expected_binds=host_config.get('Binds'),
volume_driver=host_config.get('VolumeDriver'),
volumes_from=host_config.get('VolumesFrom'),
working_dir=config.get('WorkingDir'),
publish_all_ports=host_config.get('PublishAllPorts'),
expected_healthcheck=config.get('Healthcheck'),
disable_healthcheck=(not config.get('Healthcheck') or config.get('Healthcheck').get('Test') == ['NONE']),
device_read_bps=host_config.get('BlkioDeviceReadBps'),
device_write_bps=host_config.get('BlkioDeviceWriteBps'),
device_read_iops=host_config.get('BlkioDeviceReadIOps'),
device_write_iops=host_config.get('BlkioDeviceWriteIOps'),
pids_limit=host_config.get('PidsLimit'),
# According to https://github.com/moby/moby/, support for HostConfig.Mounts
# has been included at least since v17.03.0-ce, which has API version 1.26.
# The previous tag, v1.9.1, has API version 1.21 and does not have
# HostConfig.Mounts. Whether API version 1.25 already supports it is unclear.
expected_mounts=self._decode_mounts(host_config.get('Mounts')),
cpus=host_config.get('NanoCpus'),
)
# Options which don't make sense without their accompanying option
if self.parameters.log_driver:
config_mapping['log_driver'] = log_config.get('Type')
config_mapping['log_options'] = log_config.get('Config')
if self.parameters.client.option_minimal_versions['auto_remove']['supported']:
# auto_remove is only supported in Docker SDK for Python >= 2.0.0; unfortunately
# it has a default value, which is why we have to jump through hoops here
config_mapping['auto_remove'] = host_config.get('AutoRemove')
if self.parameters.client.option_minimal_versions['stop_timeout']['supported']:
# stop_timeout is only supported in Docker SDK for Python >= 2.1. Note that
# stop_timeout has a hybrid role, in that it used to be something only used
# for stopping containers, and is now also used as a container property.
# That's why it needs special handling here.
config_mapping['stop_timeout'] = config.get('StopTimeout')
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# For docker API < 1.22, update_container() is not supported. Thus
# we need to handle all limits which are usually handled by
# update_container() as configuration changes which require a container
# restart.
restart_policy = host_config.get('RestartPolicy', dict())
# Options which don't make sense without their accompanying option
if self.parameters.restart_policy:
config_mapping['restart_retries'] = restart_policy.get('MaximumRetryCount')
config_mapping.update(dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
restart_policy=restart_policy.get('Name')
))
differences = DifferenceTracker()
for key, value in config_mapping.items():
minimal_version = self.parameters.client.option_minimal_versions.get(key, {})
if not minimal_version.get('supported', True):
continue
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
self.log('check differences %s %s vs %s (%s)' % (key, getattr(self.parameters, key), str(value), compare))
if getattr(self.parameters, key, None) is not None:
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
p = getattr(self.parameters, key)
c = value
if compare['type'] == 'set':
# Since the order does not matter, sort so that the diff output is better.
if p is not None:
p = sorted(p)
if c is not None:
c = sorted(c)
elif compare['type'] == 'set(dict)':
# Since the order does not matter, sort so that the diff output is better.
if key == 'expected_mounts':
# For selected values, use one entry as key
def sort_key_fn(x):
return x['target']
else:
# We sort the list of dictionaries by using the sorted items of a dict as its key.
def sort_key_fn(x):
return sorted((a, str(b)) for a, b in x.items())
if p is not None:
p = sorted(p, key=sort_key_fn)
if c is not None:
c = sorted(c, key=sort_key_fn)
differences.add(key, parameter=p, active=c)
has_differences = not differences.empty
return has_differences, differences
def has_different_resource_limits(self):
'''
Diff parameters and container resource limits
'''
if not self.container.get('HostConfig'):
self.fail("limits_differ_from_container: Error parsing container properties. HostConfig missing.")
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# update_container() call not supported
return False, []
host_config = self.container['HostConfig']
restart_policy = host_config.get('RestartPolicy') or dict()
config_mapping = dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
restart_policy=restart_policy.get('Name')
)
# Options which don't make sense without their accompanying option
if self.parameters.restart_policy:
config_mapping['restart_retries'] = restart_policy.get('MaximumRetryCount')
differences = DifferenceTracker()
for key, value in config_mapping.items():
if getattr(self.parameters, key, None):
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
differences.add(key, parameter=getattr(self.parameters, key), active=value)
different = not differences.empty
return different, differences
def has_network_differences(self):
'''
Check if the container is connected to requested networks with expected options: links, aliases, ipv4, ipv6
'''
different = False
differences = []
if not self.parameters.networks:
return different, differences
if not self.container.get('NetworkSettings'):
self.fail("has_missing_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings']['Networks']
for network in self.parameters.networks:
network_info = connected_networks.get(network['name'])
if network_info is None:
different = True
differences.append(dict(
parameter=network,
container=None
))
else:
diff = False
network_info_ipam = network_info.get('IPAMConfig') or {}
if network.get('ipv4_address') and network['ipv4_address'] != network_info_ipam.get('IPv4Address'):
diff = True
if network.get('ipv6_address') and network['ipv6_address'] != network_info_ipam.get('IPv6Address'):
diff = True
if network.get('aliases'):
if not compare_generic(network['aliases'], network_info.get('Aliases'), 'allow_more_present', 'set'):
diff = True
if network.get('links'):
expected_links = []
for link, alias in network['links']:
expected_links.append("%s:%s" % (link, alias))
if not compare_generic(expected_links, network_info.get('Links'), 'allow_more_present', 'set'):
diff = True
if diff:
different = True
differences.append(dict(
parameter=network,
container=dict(
name=network['name'],
ipv4_address=network_info_ipam.get('IPv4Address'),
ipv6_address=network_info_ipam.get('IPv6Address'),
aliases=network_info.get('Aliases'),
links=network_info.get('Links')
)
))
return different, differences
def has_extra_networks(self):
'''
Check if the container is connected to non-requested networks
'''
extra_networks = []
extra = False
if not self.container.get('NetworkSettings'):
self.fail("has_extra_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings'].get('Networks')
if connected_networks:
for network, network_config in connected_networks.items():
keep = False
if self.parameters.networks:
for expected_network in self.parameters.networks:
if expected_network['name'] == network:
keep = True
if not keep:
extra = True
extra_networks.append(dict(name=network, id=network_config['NetworkID']))
return extra, extra_networks
def _get_expected_devices(self):
if not self.parameters.devices:
return None
expected_devices = []
for device in self.parameters.devices:
parts = device.split(':')
if len(parts) == 1:
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[0],
PathOnHost=parts[0]
))
elif len(parts) == 2:
parts = device.split(':')
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[1],
PathOnHost=parts[0]
)
)
else:
expected_devices.append(
dict(
CgroupPermissions=parts[2],
PathInContainer=parts[1],
PathOnHost=parts[0]
))
return expected_devices
def _get_expected_entrypoint(self):
if not self.parameters.entrypoint:
return None
return shlex.split(self.parameters.entrypoint)
def _get_expected_ports(self):
if not self.parameters.published_ports:
return None
expected_bound_ports = {}
for container_port, config in self.parameters.published_ports.items():
if isinstance(container_port, int):
container_port = "%s/tcp" % container_port
if len(config) == 1:
if isinstance(config[0], int):
expected_bound_ports[container_port] = [{'HostIp': "0.0.0.0", 'HostPort': config[0]}]
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': ""}]
elif isinstance(config[0], tuple):
expected_bound_ports[container_port] = []
for host_ip, host_port in config:
expected_bound_ports[container_port].append({'HostIp': host_ip, 'HostPort': str(host_port)})
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': str(config[1])}]
return expected_bound_ports
def _get_expected_links(self):
if self.parameters.links is None:
return None
self.log('parameter links:')
self.log(self.parameters.links, pretty_print=True)
exp_links = []
for link, alias in self.parameters.links:
exp_links.append("/%s:%s/%s" % (link, ('/' + self.parameters.name), alias))
return exp_links
def _get_expected_binds(self, image):
self.log('_get_expected_binds')
image_vols = []
if image:
image_vols = self._get_image_binds(image[self.parameters.client.image_inspect_source].get('Volumes'))
param_vols = []
if self.parameters.volumes:
for vol in self.parameters.volumes:
host = None
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
host, container, mode = parts
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(parts) == 2:
if not is_volume_permissions(parts[1]):
host, container, mode = parts + ['rw']
if host:
param_vols.append("%s:%s:%s" % (host, container, mode))
result = list(set(image_vols + param_vols))
self.log("expected_binds:")
self.log(result, pretty_print=True)
return result
def _get_image_binds(self, volumes):
'''
Convert array of binds to array of strings with format host_path:container_path:mode
:param volumes: array of bind dicts
:return: array of strings
'''
results = []
if isinstance(volumes, dict):
results += self._get_bind_from_dict(volumes)
elif isinstance(volumes, list):
for vol in volumes:
results += self._get_bind_from_dict(vol)
return results
@staticmethod
def _get_bind_from_dict(volume_dict):
results = []
if volume_dict:
for host_path, config in volume_dict.items():
if isinstance(config, dict) and config.get('bind'):
container_path = config.get('bind')
mode = config.get('mode', 'rw')
results.append("%s:%s:%s" % (host_path, container_path, mode))
return results
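# The dict-to-bind-string conversion above can be exercised in isolation.
# This sketch (illustrative, not the module's API) mirrors _get_bind_from_dict:

```python
# Hypothetical sketch: flatten an image's Volumes dict into
# "host:container:mode" strings, defaulting the mode to read-write.
def bind_from_dict(volume_dict):
    results = []
    for host_path, config in (volume_dict or {}).items():
        if isinstance(config, dict) and config.get('bind'):
            results.append("%s:%s:%s" % (host_path, config['bind'], config.get('mode', 'rw')))
    return results

print(bind_from_dict({'/data': {'bind': '/srv/data', 'mode': 'ro'}}))
# → ['/data:/srv/data:ro']
```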
def _get_expected_volumes(self, image):
self.log('_get_expected_volumes')
expected_vols = dict()
if image and image[self.parameters.client.image_inspect_source].get('Volumes'):
expected_vols.update(image[self.parameters.client.image_inspect_source].get('Volumes'))
if self.parameters.volumes:
for vol in self.parameters.volumes:
# We only expect anonymous volumes to show up in the list
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
continue
if len(parts) == 2:
if not is_volume_permissions(parts[1]):
continue
expected_vols[vol] = dict()
if not expected_vols:
expected_vols = None
self.log("expected_volumes:")
self.log(expected_vols, pretty_print=True)
return expected_vols
def _get_expected_env(self, image):
self.log('_get_expected_env')
expected_env = dict()
if image and image[self.parameters.client.image_inspect_source].get('Env'):
for env_var in image[self.parameters.client.image_inspect_source]['Env']:
parts = env_var.split('=', 1)
expected_env[parts[0]] = parts[1]
if self.parameters.env:
expected_env.update(self.parameters.env)
param_env = []
for key, value in expected_env.items():
param_env.append("%s=%s" % (key, value))
return param_env
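# The env merge above can be sketched standalone (hypothetical helper, not
# the module's API): image Env entries are split on the first '=', the
# task-level env overrides them, and the result is re-joined:

```python
# Hypothetical sketch of the expected-env computation: image defaults
# first, task-supplied values win, output as KEY=value strings.
def merge_env(image_env, task_env):
    merged = dict(e.split('=', 1) for e in image_env)
    merged.update(task_env or {})
    return ['%s=%s' % kv for kv in merged.items()]

print(sorted(merge_env(['PATH=/usr/bin', 'LANG=C'], {'LANG': 'en_US.UTF-8'})))
# → ['LANG=en_US.UTF-8', 'PATH=/usr/bin']
```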
def _get_expected_exposed(self, image):
self.log('_get_expected_exposed')
image_ports = []
if image:
image_exposed_ports = image[self.parameters.client.image_inspect_source].get('ExposedPorts') or {}
image_ports = [self._normalize_port(p) for p in image_exposed_ports.keys()]
param_ports = []
if self.parameters.ports:
param_ports = [str(p[0]) + '/' + p[1] for p in self.parameters.ports]
result = list(set(image_ports + param_ports))
self.log(result, pretty_print=True)
return result
def _get_expected_ulimits(self, config_ulimits):
self.log('_get_expected_ulimits')
if config_ulimits is None:
return None
results = []
for limit in config_ulimits:
results.append(dict(
Name=limit.name,
Soft=limit.soft,
Hard=limit.hard
))
return results
def _get_expected_sysctls(self, config_sysctls):
self.log('_get_expected_sysctls')
if config_sysctls is None:
return None
result = dict()
for key, value in config_sysctls.items():
result[key] = str(value)
return result
def _get_expected_cmd(self):
self.log('_get_expected_cmd')
if not self.parameters.command:
return None
return shlex.split(self.parameters.command)
def _convert_simple_dict_to_list(self, param_name, join_with=':'):
if getattr(self.parameters, param_name, None) is None:
return None
results = []
for key, value in getattr(self.parameters, param_name).items():
results.append("%s%s%s" % (key, join_with, value))
return results
def _normalize_port(self, port):
if '/' not in port:
return port + '/tcp'
return port
def _get_expected_healthcheck(self):
self.log('_get_expected_healthcheck')
expected_healthcheck = dict()
if self.parameters.healthcheck:
expected_healthcheck.update([(k.title().replace("_", ""), v)
for k, v in self.parameters.healthcheck.items()])
return expected_healthcheck
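# The key transformation above — snake_case healthcheck options to the
# CamelCase keys Docker's inspect output uses — can be demonstrated in
# isolation (hypothetical helper name):

```python
# Hypothetical sketch: title-case each word and drop underscores, so
# start_period becomes StartPeriod and test becomes Test.
def camelize(options):
    return {k.title().replace('_', ''): v for k, v in options.items()}

print(camelize({'test': ['CMD', 'true'], 'start_period': '30s'}))
# → {'Test': ['CMD', 'true'], 'StartPeriod': '30s'}
```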
class ContainerManager(DockerBaseClass):
'''
Perform container management tasks
'''
def __init__(self, client):
super(ContainerManager, self).__init__()
if client.module.params.get('log_options') and not client.module.params.get('log_driver'):
client.module.warn('log_options is ignored when log_driver is not specified')
if client.module.params.get('healthcheck') and not client.module.params.get('healthcheck').get('test'):
client.module.warn('healthcheck is ignored when test is not specified')
if client.module.params.get('restart_retries') is not None and not client.module.params.get('restart_policy'):
client.module.warn('restart_retries is ignored when restart_policy is not specified')
self.client = client
self.parameters = TaskParameters(client)
self.check_mode = self.client.check_mode
self.results = {'changed': False, 'actions': []}
self.diff = {}
self.diff_tracker = DifferenceTracker()
self.facts = {}
state = self.parameters.state
if state in ('stopped', 'started', 'present'):
self.present(state)
elif state == 'absent':
self.absent()
if not self.check_mode and not self.parameters.debug:
self.results.pop('actions')
if self.client.module._diff or self.parameters.debug:
self.diff['before'], self.diff['after'] = self.diff_tracker.get_before_after()
self.results['diff'] = self.diff
if self.facts:
self.results['ansible_facts'] = {'docker_container': self.facts}
self.results['container'] = self.facts
def wait_for_state(self, container_id, complete_states=None, wait_states=None, accept_removal=False, max_wait=None):
delay = 1.0
total_wait = 0
while True:
# Inspect container
result = self.client.get_container_by_id(container_id)
if result is None:
if accept_removal:
return
msg = 'Encountered vanished container while waiting for container "{0}"'
self.fail(msg.format(container_id))
# Check container state
state = result.get('State', {}).get('Status')
if complete_states is not None and state in complete_states:
return
if wait_states is not None and state not in wait_states:
msg = 'Encountered unexpected state "{1}" while waiting for container "{0}"'
self.fail(msg.format(container_id, state))
# Wait
if max_wait is not None:
if total_wait > max_wait:
msg = 'Timeout of {1} seconds exceeded while waiting for container "{0}"'
self.fail(msg.format(container_id, max_wait))
if total_wait + delay > max_wait:
delay = max_wait - total_wait
sleep(delay)
total_wait += delay
# Exponential backoff, but never wait longer than 10 seconds
# (1.1**24 < 10, 1.1**25 > 10, so it will take 25 iterations
# until the maximal 10 seconds delay is reached. By then, the
# code will have slept for ~1.5 minutes.)
delay = min(delay * 1.1, 10)
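# The backoff schedule used by wait_for_state can be sketched standalone
# (hypothetical helper): the delay grows 10% per poll and is capped at 10
# seconds, which the comment's 1.1**24 < 10 < 1.1**25 bound confirms.

```python
# Hypothetical sketch of the polling backoff: start at 1.0s, multiply by
# 1.1 each iteration, never exceed 10s.
def backoff_delays(iterations):
    delay, delays = 1.0, []
    for _ in range(iterations):
        delays.append(delay)
        delay = min(delay * 1.1, 10)
    return delays

d = backoff_delays(30)
print(d[0], d[-1])
# → 1.0 10
```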
def present(self, state):
container = self._get_container(self.parameters.name)
was_running = container.running
was_paused = container.paused
container_created = False
# If the image parameter was passed then we need to deal with the image
# version comparison. Otherwise we handle this depending on whether
# the container already runs or not; in the former case, in case the
# container needs to be restarted, we use the existing container's
# image ID.
image = self._get_image()
self.log(image, pretty_print=True)
if not container.exists or container.removing:
# New container
if container.removing:
self.log('Found container in removal phase')
else:
self.log('No container found')
if not self.parameters.image:
self.fail('Cannot create container when image is not specified!')
self.diff_tracker.add('exists', parameter=True, active=False)
if container.removing and not self.check_mode:
# Wait for container to be removed before trying to create it
self.wait_for_state(
container.Id, wait_states=['removing'], accept_removal=True, max_wait=self.parameters.removal_wait_timeout)
new_container = self.container_create(self.parameters.image, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
else:
# Existing container
different, differences = container.has_different_configuration(image)
image_different = False
if self.parameters.comparisons['image']['comparison'] == 'strict':
image_different = self._image_is_different(image, container)
if image_different or different or self.parameters.recreate:
self.diff_tracker.merge(differences)
self.diff['differences'] = differences.get_legacy_docker_container_diffs()
if image_different:
self.diff['image_different'] = True
self.log("differences")
self.log(differences.get_legacy_docker_container_diffs(), pretty_print=True)
image_to_use = self.parameters.image
if not image_to_use and container and container.Image:
image_to_use = container.Image
if not image_to_use:
self.fail('Cannot recreate container when image is not specified or cannot be extracted from current container!')
if container.running:
self.container_stop(container.Id)
self.container_remove(container.Id)
if not self.check_mode:
self.wait_for_state(
container.Id, wait_states=['removing'], accept_removal=True, max_wait=self.parameters.removal_wait_timeout)
new_container = self.container_create(image_to_use, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
if container and container.exists:
container = self.update_limits(container)
container = self.update_networks(container, container_created)
if state == 'started' and not container.running:
self.diff_tracker.add('running', parameter=True, active=was_running)
container = self.container_start(container.Id)
elif state == 'started' and self.parameters.restart:
self.diff_tracker.add('running', parameter=True, active=was_running)
self.diff_tracker.add('restarted', parameter=True, active=False)
container = self.container_restart(container.Id)
elif state == 'stopped' and container.running:
self.diff_tracker.add('running', parameter=False, active=was_running)
self.container_stop(container.Id)
container = self._get_container(container.Id)
if state == 'started' and self.parameters.paused is not None and container.paused != self.parameters.paused:
self.diff_tracker.add('paused', parameter=self.parameters.paused, active=was_paused)
if not self.check_mode:
try:
if self.parameters.paused:
self.client.pause(container=container.Id)
else:
self.client.unpause(container=container.Id)
except Exception as exc:
self.fail("Error %s container %s: %s" % (
"pausing" if self.parameters.paused else "unpausing", container.Id, str(exc)
))
container = self._get_container(container.Id)
self.results['changed'] = True
self.results['actions'].append(dict(set_paused=self.parameters.paused))
self.facts = container.raw
def absent(self):
container = self._get_container(self.parameters.name)
if container.exists:
if container.running:
self.diff_tracker.add('running', parameter=False, active=True)
self.container_stop(container.Id)
self.diff_tracker.add('exists', parameter=False, active=True)
self.container_remove(container.Id)
def fail(self, msg, **kwargs):
self.client.fail(msg, **kwargs)
def _output_logs(self, msg):
self.client.module.log(msg=msg)
def _get_container(self, container):
'''
Expects container ID or Name. Returns a container object
'''
return Container(self.client.get_container(container), self.parameters)
def _get_image(self):
if not self.parameters.image:
self.log('No image specified')
return None
if is_image_name_id(self.parameters.image):
image = self.client.find_image_by_id(self.parameters.image)
else:
repository, tag = utils.parse_repository_tag(self.parameters.image)
if not tag:
tag = "latest"
image = self.client.find_image(repository, tag)
if not image or self.parameters.pull:
if not self.check_mode:
self.log("Pull the image.")
image, alreadyToLatest = self.client.pull_image(repository, tag)
if alreadyToLatest:
self.results['changed'] = False
else:
self.results['changed'] = True
self.results['actions'].append(dict(pulled_image="%s:%s" % (repository, tag)))
elif not image:
# If the image isn't there, claim we'll pull.
# (Implicitly: if the image is there, claim it already was latest.)
self.results['changed'] = True
self.results['actions'].append(dict(pulled_image="%s:%s" % (repository, tag)))
self.log("image")
self.log(image, pretty_print=True)
return image
def _image_is_different(self, image, container):
if image and image.get('Id'):
if container and container.Image:
if image.get('Id') != container.Image:
self.diff_tracker.add('image', parameter=image.get('Id'), active=container.Image)
return True
return False
def update_limits(self, container):
limits_differ, different_limits = container.has_different_resource_limits()
if limits_differ:
self.log("limit differences:")
self.log(different_limits.get_legacy_docker_container_diffs(), pretty_print=True)
self.diff_tracker.merge(different_limits)
if limits_differ and not self.check_mode:
self.container_update(container.Id, self.parameters.update_parameters)
return self._get_container(container.Id)
return container
def update_networks(self, container, container_created):
updated_container = container
if self.parameters.comparisons['networks']['comparison'] != 'ignore' or container_created:
has_network_differences, network_differences = container.has_network_differences()
if has_network_differences:
if self.diff.get('differences'):
self.diff['differences'].append(dict(network_differences=network_differences))
else:
self.diff['differences'] = [dict(network_differences=network_differences)]
for netdiff in network_differences:
self.diff_tracker.add(
'network.{0}'.format(netdiff['parameter']['name']),
parameter=netdiff['parameter'],
active=netdiff['container']
)
self.results['changed'] = True
updated_container = self._add_networks(container, network_differences)
if (self.parameters.comparisons['networks']['comparison'] == 'strict' and self.parameters.networks is not None) or self.parameters.purge_networks:
has_extra_networks, extra_networks = container.has_extra_networks()
if has_extra_networks:
if self.diff.get('differences'):
self.diff['differences'].append(dict(purge_networks=extra_networks))
else:
self.diff['differences'] = [dict(purge_networks=extra_networks)]
for extra_network in extra_networks:
self.diff_tracker.add(
'network.{0}'.format(extra_network['name']),
active=extra_network
)
self.results['changed'] = True
updated_container = self._purge_networks(container, extra_networks)
return updated_container
def _add_networks(self, container, differences):
for diff in differences:
# remove the container from the network, if connected
if diff.get('container'):
self.results['actions'].append(dict(removed_from_network=diff['parameter']['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, diff['parameter']['id'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (diff['parameter']['name'],
str(exc)))
# connect to the network
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if diff['parameter'].get(para):
params[para] = diff['parameter'][para]
self.results['actions'].append(dict(added_to_network=diff['parameter']['name'], network_parameters=params))
if not self.check_mode:
try:
self.log("Connecting container to network %s" % diff['parameter']['id'])
self.log(params, pretty_print=True)
self.client.connect_container_to_network(container.Id, diff['parameter']['id'], **params)
except Exception as exc:
self.fail("Error connecting container to network %s - %s" % (diff['parameter']['name'], str(exc)))
return self._get_container(container.Id)
def _purge_networks(self, container, networks):
for network in networks:
self.results['actions'].append(dict(removed_from_network=network['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, network['name'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (network['name'],
str(exc)))
return self._get_container(container.Id)
def container_create(self, image, create_parameters):
self.log("create container")
self.log("image: %s parameters:" % image)
self.log(create_parameters, pretty_print=True)
self.results['actions'].append(dict(created="Created container", create_parameters=create_parameters))
self.results['changed'] = True
new_container = None
if not self.check_mode:
try:
new_container = self.client.create_container(image, **create_parameters)
self.client.report_warnings(new_container)
except Exception as exc:
self.fail("Error creating container: %s" % str(exc))
return self._get_container(new_container['Id'])
return new_container
def container_start(self, container_id):
self.log("start container %s" % (container_id))
self.results['actions'].append(dict(started=container_id))
self.results['changed'] = True
if not self.check_mode:
try:
self.client.start(container=container_id)
except Exception as exc:
self.fail("Error starting container %s: %s" % (container_id, str(exc)))
if self.parameters.detach is False:
if self.client.docker_py_version >= LooseVersion('3.0'):
status = self.client.wait(container_id)['StatusCode']
else:
status = self.client.wait(container_id)
if self.parameters.auto_remove:
output = "Cannot retrieve result as auto_remove is enabled"
if self.parameters.output_logs:
self.client.module.warn('Cannot output_logs if auto_remove is enabled!')
else:
config = self.client.inspect_container(container_id)
logging_driver = config['HostConfig']['LogConfig']['Type']
if logging_driver in ('json-file', 'journald'):
output = self.client.logs(container_id, stdout=True, stderr=True, stream=False, timestamps=False)
if self.parameters.output_logs:
self._output_logs(msg=output)
else:
output = "Result logged using `%s` driver" % logging_driver
if status != 0:
self.fail(output, status=status)
if self.parameters.cleanup:
self.container_remove(container_id, force=True)
insp = self._get_container(container_id)
if insp.raw:
insp.raw['Output'] = output
else:
insp.raw = dict(Output=output)
return insp
return self._get_container(container_id)
def container_remove(self, container_id, link=False, force=False):
volume_state = (not self.parameters.keep_volumes)
self.log("remove container container:%s v:%s link:%s force:%s" % (container_id, volume_state, link, force))
self.results['actions'].append(dict(removed=container_id, volume_state=volume_state, link=link, force=force))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
response = self.client.remove_container(container_id, v=volume_state, link=link, force=force)
except NotFound as dummy:
pass
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be removed
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error removing container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
if 'removal of container ' in exc.explanation and ' is already in progress' in exc.explanation:
pass
else:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def container_update(self, container_id, update_parameters):
if update_parameters:
self.log("update container %s" % (container_id))
self.log(update_parameters, pretty_print=True)
self.results['actions'].append(dict(updated=container_id, update_parameters=update_parameters))
self.results['changed'] = True
if not self.check_mode and callable(getattr(self.client, 'update_container')):
try:
result = self.client.update_container(container_id, **update_parameters)
self.client.report_warnings(result)
except Exception as exc:
self.fail("Error updating container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_kill(self, container_id):
self.results['actions'].append(dict(killed=container_id, signal=self.parameters.kill_signal))
self.results['changed'] = True
response = None
if not self.check_mode:
try:
if self.parameters.kill_signal:
response = self.client.kill(container_id, signal=self.parameters.kill_signal)
else:
response = self.client.kill(container_id)
except Exception as exc:
self.fail("Error killing container %s: %s" % (container_id, exc))
return response
def container_restart(self, container_id):
self.results['actions'].append(dict(restarted=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
if not self.check_mode:
try:
if self.parameters.stop_timeout:
dummy = self.client.restart(container_id, timeout=self.parameters.stop_timeout)
else:
dummy = self.client.restart(container_id)
except Exception as exc:
self.fail("Error restarting container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_stop(self, container_id):
if self.parameters.force_kill:
self.container_kill(container_id)
return
self.results['actions'].append(dict(stopped=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
if self.parameters.stop_timeout:
response = self.client.stop(container_id, timeout=self.parameters.stop_timeout)
else:
response = self.client.stop(container_id)
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be stopped
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error stopping container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def detect_ipvX_address_usage(client):
'''
Helper function to detect whether any specified network uses ipv4_address or ipv6_address
'''
for network in client.module.params.get("networks") or []:
if network.get('ipv4_address') is not None or network.get('ipv6_address') is not None:
return True
return False
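# The detection above can be restated as a one-liner (hypothetical helper,
# operating on a plain list rather than module params):

```python
# Hypothetical sketch mirroring detect_ipvX_address_usage(): True as soon
# as any network entry pins an IPv4 or IPv6 address.
def uses_static_ip(networks):
    return any(
        n.get('ipv4_address') is not None or n.get('ipv6_address') is not None
        for n in networks or []
    )

print(uses_static_ip([{'name': 'mynet', 'ipv4_address': '172.20.0.5'}]))
# → True
```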
class AnsibleDockerClientContainer(AnsibleDockerClient):
# A list of module options which are not docker container properties
__NON_CONTAINER_PROPERTY_OPTIONS = tuple([
'env_file', 'force_kill', 'keep_volumes', 'ignore_image', 'name', 'pull', 'purge_networks',
'recreate', 'restart', 'state', 'trust_image_content', 'networks', 'cleanup', 'kill_signal',
'output_logs', 'paused', 'removal_wait_timeout'
] + list(DOCKER_COMMON_ARGS.keys()))
def _parse_comparisons(self):
comparisons = {}
comp_aliases = {}
# Put in defaults
explicit_types = dict(
command='list',
devices='set(dict)',
dns_search_domains='list',
dns_servers='list',
env='set',
entrypoint='list',
etc_hosts='set',
mounts='set(dict)',
networks='set(dict)',
ulimits='set(dict)',
device_read_bps='set(dict)',
device_write_bps='set(dict)',
device_read_iops='set(dict)',
device_write_iops='set(dict)',
)
all_options = set() # this is for improving user feedback when a wrong option was specified for comparison
default_values = dict(
stop_timeout='ignore',
)
for option, data in self.module.argument_spec.items():
all_options.add(option)
for alias in data.get('aliases', []):
all_options.add(alias)
# Ignore options which aren't used as container properties
if option in self.__NON_CONTAINER_PROPERTY_OPTIONS and option != 'networks':
continue
# Determine option type
if option in explicit_types:
datatype = explicit_types[option]
elif data['type'] == 'list':
datatype = 'set'
elif data['type'] == 'dict':
datatype = 'dict'
else:
datatype = 'value'
# Determine comparison type
if option in default_values:
comparison = default_values[option]
elif datatype in ('list', 'value'):
comparison = 'strict'
else:
comparison = 'allow_more_present'
comparisons[option] = dict(type=datatype, comparison=comparison, name=option)
# Keep track of aliases
comp_aliases[option] = option
for alias in data.get('aliases', []):
comp_aliases[alias] = option
# Process legacy ignore options
if self.module.params['ignore_image']:
comparisons['image']['comparison'] = 'ignore'
if self.module.params['purge_networks']:
comparisons['networks']['comparison'] = 'strict'
# Process options
if self.module.params.get('comparisons'):
# If '*' appears in comparisons, process it first
if '*' in self.module.params['comparisons']:
value = self.module.params['comparisons']['*']
if value not in ('strict', 'ignore'):
self.fail("The wildcard can only be used with comparison modes 'strict' and 'ignore'!")
for option, v in comparisons.items():
if option == 'networks':
# `networks` is special: only update if
# some value is actually specified
if self.module.params['networks'] is None:
continue
v['comparison'] = value
# Now process all other comparisons.
comp_aliases_used = {}
for key, value in self.module.params['comparisons'].items():
if key == '*':
continue
# Find main key
key_main = comp_aliases.get(key)
if key_main is None:
if key in all_options:
self.fail("The module option '%s' cannot be specified in the comparisons dict, "
"since it does not correspond to the container's state!" % key)
self.fail("Unknown module option '%s' in comparisons dict!" % key)
if key_main in comp_aliases_used:
self.fail("Both '%s' and '%s' (aliases of %s) are specified in comparisons dict!" % (key, comp_aliases_used[key_main], key_main))
comp_aliases_used[key_main] = key
# Check value and update accordingly
if value in ('strict', 'ignore'):
comparisons[key_main]['comparison'] = value
elif value == 'allow_more_present':
if comparisons[key_main]['type'] == 'value':
self.fail("Option '%s' is a value and not a set/list/dict, so its comparison cannot be %s" % (key, value))
comparisons[key_main]['comparison'] = value
else:
self.fail("Unknown comparison mode '%s'!" % value)
# Add implicit options
comparisons['publish_all_ports'] = dict(type='value', comparison='strict', name='published_ports')
comparisons['expected_ports'] = dict(type='dict', comparison=comparisons['published_ports']['comparison'], name='expected_ports')
comparisons['disable_healthcheck'] = dict(type='value',
comparison='ignore' if comparisons['healthcheck']['comparison'] == 'ignore' else 'strict',
name='disable_healthcheck')
# Check legacy values
if self.module.params['ignore_image'] and comparisons['image']['comparison'] != 'ignore':
self.module.warn('The ignore_image option has been overridden by the comparisons option!')
if self.module.params['purge_networks'] and comparisons['networks']['comparison'] != 'strict':
self.module.warn('The purge_networks option has been overridden by the comparisons option!')
self.comparisons = comparisons
def _get_additional_minimal_versions(self):
stop_timeout_supported = self.docker_api_version >= LooseVersion('1.25')
stop_timeout_needed_for_update = self.module.params.get("stop_timeout") is not None and self.module.params.get('state') != 'absent'
if stop_timeout_supported:
stop_timeout_supported = self.docker_py_version >= LooseVersion('2.1')
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker SDK for Python's version is %s. Minimum version required is 2.1 to update "
"the container's stop_timeout configuration. "
"If you use the 'docker-py' module, you have to switch to the 'docker' Python package." % (docker_version,))
else:
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker API version is %s. Minimum version required is 1.25 to set or "
"update the container's stop_timeout configuration." % (self.docker_api_version_str,))
self.option_minimal_versions['stop_timeout']['supported'] = stop_timeout_supported
def __init__(self, **kwargs):
option_minimal_versions = dict(
# internal options
log_config=dict(),
publish_all_ports=dict(),
ports=dict(),
volume_binds=dict(),
name=dict(),
# normal options
device_read_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_read_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
dns_opts=dict(docker_api_version='1.21', docker_py_version='1.10.0'),
ipc_mode=dict(docker_api_version='1.25'),
mac_address=dict(docker_api_version='1.25'),
oom_score_adj=dict(docker_api_version='1.22'),
shm_size=dict(docker_api_version='1.22'),
stop_signal=dict(docker_api_version='1.21'),
tmpfs=dict(docker_api_version='1.22'),
volume_driver=dict(docker_api_version='1.21'),
memory_reservation=dict(docker_api_version='1.21'),
kernel_memory=dict(docker_api_version='1.21'),
auto_remove=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
healthcheck=dict(docker_py_version='2.0.0', docker_api_version='1.24'),
init=dict(docker_py_version='2.2.0', docker_api_version='1.25'),
runtime=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
sysctls=dict(docker_py_version='1.10.0', docker_api_version='1.24'),
userns_mode=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
uts=dict(docker_py_version='3.5.0', docker_api_version='1.25'),
pids_limit=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
mounts=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
cpus=dict(docker_py_version='2.3.0', docker_api_version='1.25'),
# specials
ipvX_address_supported=dict(docker_py_version='1.9.0', docker_api_version='1.22',
detect_usage=detect_ipvX_address_usage,
usage_msg='ipv4_address or ipv6_address in networks'),
stop_timeout=dict(), # see _get_additional_minimal_versions()
)
super(AnsibleDockerClientContainer, self).__init__(
option_minimal_versions=option_minimal_versions,
option_minimal_versions_ignore_params=self.__NON_CONTAINER_PROPERTY_OPTIONS,
**kwargs
)
self.image_inspect_source = 'Config'
if self.docker_api_version < LooseVersion('1.21'):
self.image_inspect_source = 'ContainerConfig'
self._get_additional_minimal_versions()
self._parse_comparisons()
if self.module.params['container_default_behavior'] is None:
self.module.params['container_default_behavior'] = 'compatibility'
self.module.deprecate(
'The container_default_behavior option will change its default value from "compatibility" to '
'"no_defaults" in Ansible 2.14. To remove this warning, please specify an explicit value for it now',
version='2.14'
)
if self.module.params['container_default_behavior'] == 'compatibility':
old_default_values = dict(
auto_remove=False,
detach=True,
init=False,
interactive=False,
memory="0",
paused=False,
privileged=False,
read_only=False,
tty=False,
)
for param, value in old_default_values.items():
if self.module.params[param] is None:
self.module.params[param] = value
def main():
argument_spec = dict(
auto_remove=dict(type='bool'),
blkio_weight=dict(type='int'),
capabilities=dict(type='list', elements='str'),
cap_drop=dict(type='list', elements='str'),
cleanup=dict(type='bool', default=False),
command=dict(type='raw'),
comparisons=dict(type='dict'),
container_default_behavior=dict(type='str', choices=['compatibility', 'no_defaults']),
cpu_period=dict(type='int'),
cpu_quota=dict(type='int'),
cpus=dict(type='float'),
cpuset_cpus=dict(type='str'),
cpuset_mems=dict(type='str'),
cpu_shares=dict(type='int'),
detach=dict(type='bool'),
devices=dict(type='list', elements='str'),
device_read_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_write_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_read_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
device_write_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
dns_servers=dict(type='list', elements='str'),
dns_opts=dict(type='list', elements='str'),
dns_search_domains=dict(type='list', elements='str'),
domainname=dict(type='str'),
entrypoint=dict(type='list', elements='str'),
env=dict(type='dict'),
env_file=dict(type='path'),
etc_hosts=dict(type='dict'),
exposed_ports=dict(type='list', elements='str', aliases=['exposed', 'expose']),
force_kill=dict(type='bool', default=False, aliases=['forcekill']),
groups=dict(type='list', elements='str'),
healthcheck=dict(type='dict', options=dict(
test=dict(type='raw'),
interval=dict(type='str'),
timeout=dict(type='str'),
start_period=dict(type='str'),
retries=dict(type='int'),
)),
hostname=dict(type='str'),
ignore_image=dict(type='bool', default=False),
image=dict(type='str'),
init=dict(type='bool'),
interactive=dict(type='bool'),
ipc_mode=dict(type='str'),
keep_volumes=dict(type='bool', default=True),
kernel_memory=dict(type='str'),
kill_signal=dict(type='str'),
labels=dict(type='dict'),
links=dict(type='list', elements='str'),
log_driver=dict(type='str'),
log_options=dict(type='dict', aliases=['log_opt']),
mac_address=dict(type='str'),
memory=dict(type='str'),
memory_reservation=dict(type='str'),
memory_swap=dict(type='str'),
memory_swappiness=dict(type='int'),
mounts=dict(type='list', elements='dict', options=dict(
target=dict(type='str', required=True),
source=dict(type='str'),
type=dict(type='str', choices=['bind', 'volume', 'tmpfs', 'npipe'], default='volume'),
read_only=dict(type='bool'),
consistency=dict(type='str', choices=['default', 'consistent', 'cached', 'delegated']),
propagation=dict(type='str', choices=['private', 'rprivate', 'shared', 'rshared', 'slave', 'rslave']),
no_copy=dict(type='bool'),
labels=dict(type='dict'),
volume_driver=dict(type='str'),
volume_options=dict(type='dict'),
tmpfs_size=dict(type='str'),
tmpfs_mode=dict(type='str'),
)),
name=dict(type='str', required=True),
network_mode=dict(type='str'),
networks=dict(type='list', elements='dict', options=dict(
name=dict(type='str', required=True),
ipv4_address=dict(type='str'),
ipv6_address=dict(type='str'),
aliases=dict(type='list', elements='str'),
links=dict(type='list', elements='str'),
)),
networks_cli_compatible=dict(type='bool'),
oom_killer=dict(type='bool'),
oom_score_adj=dict(type='int'),
output_logs=dict(type='bool', default=False),
paused=dict(type='bool'),
pid_mode=dict(type='str'),
pids_limit=dict(type='int'),
privileged=dict(type='bool'),
published_ports=dict(type='list', elements='str', aliases=['ports']),
pull=dict(type='bool', default=False),
purge_networks=dict(type='bool', default=False),
read_only=dict(type='bool'),
recreate=dict(type='bool', default=False),
removal_wait_timeout=dict(type='float'),
restart=dict(type='bool', default=False),
restart_policy=dict(type='str', choices=['no', 'on-failure', 'always', 'unless-stopped']),
restart_retries=dict(type='int'),
runtime=dict(type='str'),
security_opts=dict(type='list', elements='str'),
shm_size=dict(type='str'),
state=dict(type='str', default='started', choices=['absent', 'present', 'started', 'stopped']),
stop_signal=dict(type='str'),
stop_timeout=dict(type='int'),
sysctls=dict(type='dict'),
tmpfs=dict(type='list', elements='str'),
trust_image_content=dict(type='bool', default=False, removed_in_version='2.14'),
tty=dict(type='bool'),
ulimits=dict(type='list', elements='str'),
user=dict(type='str'),
userns_mode=dict(type='str'),
uts=dict(type='str'),
volume_driver=dict(type='str'),
volumes=dict(type='list', elements='str'),
volumes_from=dict(type='list', elements='str'),
working_dir=dict(type='str'),
)
required_if = [
('state', 'present', ['image'])
]
client = AnsibleDockerClientContainer(
argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True,
min_docker_api_version='1.20',
)
if client.module.params['networks_cli_compatible'] is None and client.module.params['networks']:
client.module.deprecate(
'Please note that docker_container handles networks slightly different than docker CLI. '
'If you specify networks, the default network will still be attached as the first network. '
'(You can specify purge_networks to remove all networks not explicitly listed.) '
'This behavior will change in Ansible 2.12. You can change the behavior now by setting '
'the new `networks_cli_compatible` option to `yes`, and remove this warning by setting '
'it to `no`',
version='2.12'
)
if client.module.params['networks_cli_compatible'] is True and client.module.params['networks'] and client.module.params['network_mode'] is None:
client.module.deprecate(
'Please note that the default value for `network_mode` will change from not specified '
'(which is equal to `default`) to the name of the first network in `networks` if '
'`networks` has at least one entry and `networks_cli_compatible` is `true`. You can '
'change the behavior now by explicitly setting `network_mode` to the name of the first '
'network in `networks`, and remove this warning by setting `network_mode` to `default`. '
'Please make sure that the value you set to `network_mode` equals the inspection result '
'for existing containers, otherwise the module will recreate them. You can find out the '
'correct value by running "docker inspect --format \'{{.HostConfig.NetworkMode}}\' <container_name>"',
version='2.14'
)
try:
cm = ContainerManager(client)
client.module.exit_json(**sanitize_result(cm.results))
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,225 |
docker_container does not allow host port ranges bound to a single container port
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using the docker_container module, attempting to publish a port using a range of host ports does not work as expected. Instead, Ansible attempts to bind only the first port number in the range.
Example:
Trying to get a similar docker command as the following running in Ansible.
```
docker run -p 80-85:80 -d linuxserver/nginx
```
As of now, this will cause an error stating `Bind for 0.0.0.0:80 failed: port is already allocated`
This is a supported command argument in Docker.
Ref: https://docs.docker.com/network/links/
> Instead, you may specify a range of host ports to bind a container port to that is different than the default ephemeral port range:
>
> `$ docker run -d -p 8000-9000:5000 training/webapp python app.py`
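The range syntax above can be illustrated with a small, hypothetical parser (not part of the module) that splits a `published_ports` entry such as `"80-85:80"` into its host range and container port:

```python
def parse_port_spec(spec):
    # Hypothetical helper: split "HOST[-HOST_END]:CONTAINER" into parts.
    # With a host range, the Docker daemon is expected to pick any free
    # port from that range rather than always using the first one.
    host, _, container = spec.rpartition(':')
    if '-' in host:
        start, end = host.split('-')
        host_ports = (int(start), int(end))
    else:
        host_ports = (int(host), int(host))
    return host_ports, int(container)

parse_port_spec("80-85:80")   # ((80, 85), 80)
parse_port_spec("8080:80")    # ((8080, 8080), 80)
```

This sketch deliberately ignores the IP-prefixed forms such as `"127.0.0.1:9001:9001"` that the module also accepts.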
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
(venv) ➜ cloudbox git:(develop) ✗ ansible --version
ansible 2.10.0.dev0
config file = /srv/git/cloudbox/ansible.cfg
configured module search path = [u'/home/seed/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
executable location = /opt/ansible/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
```
cloudbox git:(develop) ✗ ansible --version
ansible 2.9.2
config file = /srv/git/cloudbox/ansible.cfg
configured module search path = [u'/home/seed/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/srv/git/cloudbox/ansible.cfg) = [u'profile_tasks']
DEFAULT_FACT_PATH(/srv/git/cloudbox/ansible.cfg) = /srv/git/cloudbox/ansible_facts.d
DEFAULT_HASH_BEHAVIOUR(/srv/git/cloudbox/ansible.cfg) = merge
DEFAULT_HOST_LIST(/srv/git/cloudbox/ansible.cfg) = [u'/srv/git/cloudbox/inventories/local']
DEFAULT_LOG_PATH(/srv/git/cloudbox/ansible.cfg) = /srv/git/cloudbox/cloudbox.log
DEFAULT_ROLES_PATH(/srv/git/cloudbox/ansible.cfg) = [u'/srv/git/cloudbox/roles', u'/srv/git/cloudbox/resources/roles']
DEFAULT_VAULT_PASSWORD_FILE(/srv/git/cloudbox/ansible.cfg) = /etc/ansible/.ansible_vault
RETRY_FILES_ENABLED(/srv/git/cloudbox/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04 LTS. Bare-metal setup
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
tasks:
- name: Create and start container
docker_container:
name: nginx
image: linuxserver/nginx
pull: yes
published_ports:
- "80-85:80"
networks:
- name: cloudbox
aliases:
- nginx2
purge_networks: yes
restart_policy: unless-stopped
state: started
- name: Create and start container
docker_container:
name: nginx2
image: linuxserver/nginx
pull: yes
published_ports:
- "80-85:80"
networks:
- name: cloudbox
aliases:
- nginx3
purge_networks: yes
restart_policy: unless-stopped
state: started
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Containers nginx2 and nginx3 to be created and bound to whatever host port was available between 80 and 85.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
https://pastebin.com/MxMrGaHm
Both containers are being bound to port 80 on the host.
|
https://github.com/ansible/ansible/issues/66225
|
https://github.com/ansible/ansible/pull/66382
|
21ae66db2ecea3fef21b9b73b5e890809d58631e
|
23b2bb4f4dc68ffa385e74b5d5c304f461887965
| 2020-01-06T21:26:54Z |
python
| 2020-02-03T22:27:40Z |
test/integration/targets/docker_container/tasks/tests/ports.yml
|
---
- name: Registering container name
set_fact:
cname: "{{ cname_prefix ~ '-options' }}"
####################################################################
## published_ports: all ############################################
####################################################################
- name: published_ports -- all
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9001"
- "9002"
published_ports:
- all
force_kill: yes
register: published_ports_1
- name: published_ports -- all (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9001"
- "9002"
published_ports:
- all
force_kill: yes
register: published_ports_2
- name: published_ports -- all (writing out 'all')
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9001"
- "9002"
published_ports:
- "9001"
- "9002"
force_kill: yes
register: published_ports_3
- name: published_ports -- all (idempotency 2)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9001"
- "9002"
published_ports:
- "9002"
- "9001"
force_kill: yes
register: published_ports_4
- name: published_ports -- all (switching back to 'all')
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9001"
- "9002"
published_ports:
- all
force_kill: yes
register: published_ports_5
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- published_ports_1 is changed
- published_ports_2 is not changed
- published_ports_3 is changed
- published_ports_4 is not changed
- published_ports_5 is changed
####################################################################
## published_ports: port range #####################################
####################################################################
- name: published_ports -- port range
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9001"
- "9010-9050"
published_ports:
- "9001:9001"
- "9010-9050:9010-9050"
force_kill: yes
register: published_ports_1
- name: published_ports -- port range (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9001"
- "9010-9050"
published_ports:
- "9001:9001"
- "9010-9050:9010-9050"
force_kill: yes
register: published_ports_2
- name: published_ports -- port range (different range)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9001"
- "9010-9050"
published_ports:
- "9001:9001"
- "9020-9060:9020-9060"
force_kill: yes
register: published_ports_3
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- published_ports_1 is changed
- published_ports_2 is not changed
- published_ports_3 is changed
####################################################################
## published_ports: IPv6 addresses #################################
####################################################################
- name: published_ports -- IPv6
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- "[::1]:9001:9001"
force_kill: yes
register: published_ports_1
- name: published_ports -- IPv6 (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- "[::1]:9001:9001"
force_kill: yes
register: published_ports_2
- name: published_ports -- IPv6 (different IP)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- "127.0.0.1:9001:9001"
force_kill: yes
register: published_ports_3
- name: published_ports -- IPv6 (hostname)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- "localhost:9001:9001"
force_kill: yes
register: published_ports_4
ignore_errors: yes
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- published_ports_1 is changed
- published_ports_2 is not changed
- published_ports_3 is changed
- published_ports_4 is failed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,341 |
Add UNC path support to the Fetch action
|
##### SUMMARY
The fetch module fails with a "path not found" error when used with a UNC path.
##### ISSUE TYPE
- ~Bug Report~
- Feature Idea
##### COMPONENT NAME
Fetch module
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ikanse/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:09:47) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
```
No changes from the default config
```
##### OS / ENVIRONMENT
```
Control node:
Fedora release 30 (Thirty)
Managed node:
Windows Server 2016 Datacenter
Version: 1607
```
##### STEPS TO REPRODUCE
Create a share on the managed Windows node.
```
- hosts: localhost
gather_facts: false
tasks:
- name: Add hostname supplied by variable to adhoc_group
add_host:
name: "HOST"
groups: adhoc_group
ansible_user: Administrator
ansible_password: 'PASSWORD'
ansible_connection: winrm
ansible_winrm_transport: basic
ansible_winrm_server_cert_validation: ignore
ansible_winrm_connection_timeout: 600
- hosts: adhoc_group
tasks:
- name: win copy
win_copy:
dest: '\\EC2AMAZ-T130RGR\testshare\test123.txt'
src: testvars.yml
- name: access file
fetch:
src: '\\EC2AMAZ-T130RGR\testshare\test123.txt'
dest: /tmp/
flat: yes
```
##### EXPECTED RESULTS
Path is rendered correctly and the file is fetched from the remote server.
##### ACTUAL RESULTS
The path is not rendered correctly by the fetch module:
```
"msg": "Path EC2AMAZ-T130RGR\\testshare\\test123.txt is not found"
```
From the win_copy module we can see that the correct path is used:
```
"dest": "\\\\EC2AMAZ-T130RGR\\testshare\\test123.txt",
```
```
ansible-playbook 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ikanse/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.7.5 (default, Oct 17 2019, 12:09:47) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAYBOOK: main.yml *************************************************************************************
2 plays in main.yml
PLAY [localhost] ***************************************************************************************
META: ran handlers
TASK [Add hostname supplied by variable to adhoc_group] ************************************************
task path: /home/ikanse/ansible/windows/main.yml:5
creating host via 'add_host': hostname=13.235.83.165
changed: [localhost] => {
"add_host": {
"groups": [
"adhoc_group"
],
"host_name": "13.235.83.165",
"host_vars": {
"ansible_connection": "winrm",
"ansible_password": "PASSWORD",
"ansible_user": "Administrator",
"ansible_winrm_connection_timeout": 600,
"ansible_winrm_server_cert_validation": "ignore",
"ansible_winrm_transport": "basic"
}
},
"changed": true
}
META: ran handlers
META: ran handlers
PLAY [adhoc_group] *************************************************************************************
TASK [Gathering Facts] *********************************************************************************
task path: /home/ikanse/ansible/windows/main.yml:16
Using module file /usr/lib/python3.7/site-packages/ansible/modules/windows/setup.ps1
Pipelining is enabled.
<13.235.83.165> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 13.235.83.165
EXEC (via pipeline wrapper)
ok: [13.235.83.165]
META: ran handlers
TASK [win copy] ****************************************************************************************
task path: /home/ikanse/ansible/windows/main.yml:18
Using module file /usr/lib/python3.7/site-packages/ansible/modules/windows/win_copy.ps1
Pipelining is enabled.
<13.235.83.165> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 13.235.83.165
EXEC (via pipeline wrapper)
ok: [13.235.83.165] => {
"changed": false,
"checksum": "4e8bfbc031942c909e62592f6a3e728af39c156c",
"dest": "\\\\EC2AMAZ-T130RGR\\testshare\\test123.txt",
"operation": "file_copy",
"original_basename": "testvars.yml",
"size": 15,
"src": "testvars.yml"
}
TASK [access file] *************************************************************************************
task path: /home/ikanse/ansible/windows/main.yml:23
Using module file /usr/lib/python3.7/site-packages/ansible/modules/windows/win_stat.ps1
Pipelining is enabled.
<13.235.83.165> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 13.235.83.165
EXEC (via pipeline wrapper)
Using module file /usr/lib/python3.7/site-packages/ansible/modules/windows/slurp.ps1
Pipelining is enabled.
EXEC (via pipeline wrapper)
fatal: [13.235.83.165]: FAILED! => {
"changed": false,
"msg": "Path EC2AMAZ-T130RGR\\testshare\\test123.txt is not found"
}
PLAY RECAP *********************************************************************************************
13.235.83.165 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/66341
|
https://github.com/ansible/ansible/pull/66604
|
81378b3e744cd0d13b33d18a4f8a38aeb8a6e97a
|
fc7980af9a42676913b4054163570ee438b82e9c
| 2020-01-10T09:31:47Z |
python
| 2020-02-04T06:34:11Z |
changelogs/fragments/66604-powershell-unc-paths.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,341 |
Add UNC path support to the Fetch action
|
##### SUMMARY
The fetch module fails with a "path not found" error when used with a UNC path.
##### ISSUE TYPE
- ~Bug Report~
- Feature Idea
##### COMPONENT NAME
Fetch module
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ikanse/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:09:47) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
```
No changes from the default config
```
##### OS / ENVIRONMENT
```
Control node:
Fedora release 30 (Thirty)
Managed node:
Windows Server 2016 Datacenter
Version: 1607
```
##### STEPS TO REPRODUCE
Create a share on the managed Windows node.
```
- hosts: localhost
gather_facts: false
tasks:
- name: Add hostname supplied by variable to adhoc_group
add_host:
name: "HOST"
groups: adhoc_group
ansible_user: Administrator
ansible_password: 'PASSWORD'
ansible_connection: winrm
ansible_winrm_transport: basic
ansible_winrm_server_cert_validation: ignore
ansible_winrm_connection_timeout: 600
- hosts: adhoc_group
tasks:
- name: win copy
win_copy:
dest: '\\EC2AMAZ-T130RGR\testshare\test123.txt'
src: testvars.yml
- name: access file
fetch:
src: '\\EC2AMAZ-T130RGR\testshare\test123.txt'
dest: /tmp/
flat: yes
```
##### EXPECTED RESULTS
Path is rendered correctly and the file is fetched from the remote server.
##### ACTUAL RESULTS
The path is not rendered correctly by the fetch module:
```
"msg": "Path EC2AMAZ-T130RGR\\testshare\\test123.txt is not found"
```
From the win_copy module we can see that the correct path is used:
```
"dest": "\\\\EC2AMAZ-T130RGR\\testshare\\test123.txt",
```
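The lost UNC prefix can be reproduced with a standalone sketch of the logic in the PowerShell shell plugin's `join_path` (simplified here; the real method also unquotes its arguments and special-cases `~`): splitting on backslashes and dropping empty components discards the two leading backslashes, which matches the error message above.

```python
def join_path(*args):
    # Sketch of the shell plugin's join_path: each argument is split on
    # backslashes and empty components are dropped, so the leading '\\'
    # of a UNC path disappears when the parts are re-joined.
    parts = []
    for arg in args:
        arg = arg.replace('/', '\\')
        parts.extend([a for a in arg.split('\\') if a])
    return '\\'.join(parts)

join_path('\\\\EC2AMAZ-T130RGR\\testshare\\test123.txt')
# -> 'EC2AMAZ-T130RGR\\testshare\\test123.txt' (UNC prefix lost)
```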
```
ansible-playbook 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ikanse/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.7.5 (default, Oct 17 2019, 12:09:47) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAYBOOK: main.yml *************************************************************************************
2 plays in main.yml
PLAY [localhost] ***************************************************************************************
META: ran handlers
TASK [Add hostname supplied by variable to adhoc_group] ************************************************
task path: /home/ikanse/ansible/windows/main.yml:5
creating host via 'add_host': hostname=13.235.83.165
changed: [localhost] => {
"add_host": {
"groups": [
"adhoc_group"
],
"host_name": "13.235.83.165",
"host_vars": {
"ansible_connection": "winrm",
"ansible_password": "PASSWORD",
"ansible_user": "Administrator",
"ansible_winrm_connection_timeout": 600,
"ansible_winrm_server_cert_validation": "ignore",
"ansible_winrm_transport": "basic"
}
},
"changed": true
}
META: ran handlers
META: ran handlers
PLAY [adhoc_group] *************************************************************************************
TASK [Gathering Facts] *********************************************************************************
task path: /home/ikanse/ansible/windows/main.yml:16
Using module file /usr/lib/python3.7/site-packages/ansible/modules/windows/setup.ps1
Pipelining is enabled.
<13.235.83.165> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 13.235.83.165
EXEC (via pipeline wrapper)
ok: [13.235.83.165]
META: ran handlers
TASK [win copy] ****************************************************************************************
task path: /home/ikanse/ansible/windows/main.yml:18
Using module file /usr/lib/python3.7/site-packages/ansible/modules/windows/win_copy.ps1
Pipelining is enabled.
<13.235.83.165> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 13.235.83.165
EXEC (via pipeline wrapper)
ok: [13.235.83.165] => {
"changed": false,
"checksum": "4e8bfbc031942c909e62592f6a3e728af39c156c",
"dest": "\\\\EC2AMAZ-T130RGR\\testshare\\test123.txt",
"operation": "file_copy",
"original_basename": "testvars.yml",
"size": 15,
"src": "testvars.yml"
}
TASK [access file] *************************************************************************************
task path: /home/ikanse/ansible/windows/main.yml:23
Using module file /usr/lib/python3.7/site-packages/ansible/modules/windows/win_stat.ps1
Pipelining is enabled.
<13.235.83.165> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 13.235.83.165
EXEC (via pipeline wrapper)
Using module file /usr/lib/python3.7/site-packages/ansible/modules/windows/slurp.ps1
Pipelining is enabled.
EXEC (via pipeline wrapper)
fatal: [13.235.83.165]: FAILED! => {
"changed": false,
"msg": "Path EC2AMAZ-T130RGR\\testshare\\test123.txt is not found"
}
PLAY RECAP *********************************************************************************************
13.235.83.165 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/66341
|
https://github.com/ansible/ansible/pull/66604
|
81378b3e744cd0d13b33d18a4f8a38aeb8a6e97a
|
fc7980af9a42676913b4054163570ee438b82e9c
| 2020-01-10T09:31:47Z |
python
| 2020-02-04T06:34:11Z |
lib/ansible/plugins/shell/powershell.py
|
# Copyright (c) 2014, Chris Church <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: powershell
plugin_type: shell
version_added: historical
short_description: Windows PowerShell
description:
- The only option when using 'winrm' or 'psrp' as a connection plugin.
- Can also be used when using 'ssh' as a connection plugin and the C(DefaultShell) has been configured to PowerShell.
extends_documentation_fragment:
- shell_windows
'''
import base64
import os
import re
import shlex
import pkgutil
import xml.etree.ElementTree as ET
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_bytes, to_text
from ansible.plugins.shell import ShellBase
_common_args = ['PowerShell', '-NoProfile', '-NonInteractive', '-ExecutionPolicy', 'Unrestricted']
# Primarily for testing, allow explicitly specifying PowerShell version via
# an environment variable.
_powershell_version = os.environ.get('POWERSHELL_VERSION', None)
if _powershell_version:
_common_args = ['PowerShell', '-Version', _powershell_version] + _common_args[1:]
def _parse_clixml(data, stream="Error"):
"""
Takes a byte string like '#< CLIXML\r\n<Objs...' and extracts the stream
message encoded in the XML data. CLIXML is used by PowerShell to encode
multiple objects in stderr.
"""
clixml = ET.fromstring(data.split(b"\r\n", 1)[-1])
namespace_match = re.match(r'{(.*)}', clixml.tag)
namespace = "{%s}" % namespace_match.group(1) if namespace_match else ""
strings = clixml.findall("./%sS" % namespace)
lines = [e.text.replace('_x000D__x000A_', '') for e in strings if e.attrib.get('S') == stream]
return to_bytes('\r\n'.join(lines))
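For illustration only, a self-contained demonstration of the stream extraction above (the function body is inlined and uses plain `bytes.encode()` instead of `to_bytes`, so it runs outside Ansible):

```python
import re
import xml.etree.ElementTree as ET

def parse_clixml(data, stream="Error"):
    # Inlined copy of _parse_clixml: drop the '#< CLIXML' header line,
    # then keep the text of <S> elements whose S attribute names the stream.
    clixml = ET.fromstring(data.split(b"\r\n", 1)[-1])
    namespace_match = re.match(r'{(.*)}', clixml.tag)
    namespace = "{%s}" % namespace_match.group(1) if namespace_match else ""
    strings = clixml.findall("./%sS" % namespace)
    lines = [e.text.replace('_x000D__x000A_', '') for e in strings
             if e.attrib.get('S') == stream]
    return '\r\n'.join(lines).encode()

sample = (b'#< CLIXML\r\n'
          b'<Objs xmlns="http://schemas.microsoft.com/powershell/2004/04">'
          b'<S S="Error">oops_x000D__x000A_</S><S S="Output">fine</S></Objs>')
parse_clixml(sample)  # b'oops'
```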
class ShellModule(ShellBase):
# Common shell filenames that this plugin handles
# Powershell is handled differently. It's selected when winrm is the
# connection
COMPATIBLE_SHELLS = frozenset()
# Family of shells this has. Must match the filename without extension
SHELL_FAMILY = 'powershell'
_SHELL_REDIRECT_ALLNULL = '> $null'
_SHELL_AND = ';'
# Used by various parts of Ansible to do Windows specific changes
_IS_WINDOWS = True
env = dict()
# We're being overly cautious about which keys to accept (more so than
# the Windows environment is capable of doing), since the powershell
# env provider's limitations don't appear to be documented.
safe_envkey = re.compile(r'^[\d\w_]{1,255}$')
# TODO: add binary module support
def assert_safe_env_key(self, key):
if not self.safe_envkey.match(key):
raise AnsibleError("Invalid PowerShell environment key: %s" % key)
return key
def safe_env_value(self, key, value):
if len(value) > 32767:
raise AnsibleError("PowerShell environment value for key '%s' exceeds 32767 characters in length" % key)
# powershell single quoted literals need single-quote doubling as their only escaping
value = value.replace("'", "''")
return to_text(value, errors='surrogate_or_strict')
def env_prefix(self, **kwargs):
# powershell/winrm env handling is handled in the exec wrapper
return ""
def join_path(self, *args):
parts = []
for arg in args:
arg = self._unquote(arg).replace('/', '\\')
parts.extend([a for a in arg.split('\\') if a])
path = '\\'.join(parts)
if path.startswith('~'):
return path
return path
def get_remote_filename(self, pathname):
# powershell requires that script files end with .ps1
base_name = os.path.basename(pathname.strip())
name, ext = os.path.splitext(base_name.strip())
if ext.lower() not in ['.ps1', '.exe']:
return name + '.ps1'
return base_name.strip()
def path_has_trailing_slash(self, path):
# Allow Windows paths to be specified using either slash.
path = self._unquote(path)
return path.endswith('/') or path.endswith('\\')
def chmod(self, paths, mode):
raise NotImplementedError('chmod is not implemented for Powershell')
def chown(self, paths, user):
raise NotImplementedError('chown is not implemented for Powershell')
def set_user_facl(self, paths, user, mode):
raise NotImplementedError('set_user_facl is not implemented for Powershell')
def remove(self, path, recurse=False):
path = self._escape(self._unquote(path))
if recurse:
return self._encode_script('''Remove-Item "%s" -Force -Recurse;''' % path)
else:
return self._encode_script('''Remove-Item "%s" -Force;''' % path)
def mkdtemp(self, basefile=None, system=False, mode=None, tmpdir=None):
# Windows does not have an equivalent for the system temp files, so
# the param is ignored
basefile = self._escape(self._unquote(basefile))
basetmpdir = tmpdir if tmpdir else self.get_option('remote_tmp')
script = '''
$tmp_path = [System.Environment]::ExpandEnvironmentVariables('%s')
$tmp = New-Item -Type Directory -Path $tmp_path -Name '%s'
Write-Output -InputObject $tmp.FullName
''' % (basetmpdir, basefile)
return self._encode_script(script.strip())
def expand_user(self, user_home_path, username=''):
# PowerShell only supports "~" (not "~username"). Resolve-Path ~ does
# not seem to work remotely, though by default we are always starting
# in the user's home directory.
user_home_path = self._unquote(user_home_path)
if user_home_path == '~':
script = 'Write-Output (Get-Location).Path'
elif user_home_path.startswith('~\\'):
script = 'Write-Output ((Get-Location).Path + "%s")' % self._escape(user_home_path[1:])
else:
script = 'Write-Output "%s"' % self._escape(user_home_path)
return self._encode_script(script)
def exists(self, path):
path = self._escape(self._unquote(path))
script = '''
If (Test-Path "%s")
{
$res = 0;
}
Else
{
$res = 1;
}
Write-Output "$res";
Exit $res;
''' % path
return self._encode_script(script)
def checksum(self, path, *args, **kwargs):
path = self._escape(self._unquote(path))
script = '''
If (Test-Path -PathType Leaf "%(path)s")
{
$sp = new-object -TypeName System.Security.Cryptography.SHA1CryptoServiceProvider;
$fp = [System.IO.File]::Open("%(path)s", [System.IO.Filemode]::Open, [System.IO.FileAccess]::Read);
[System.BitConverter]::ToString($sp.ComputeHash($fp)).Replace("-", "").ToLower();
$fp.Dispose();
}
ElseIf (Test-Path -PathType Container "%(path)s")
{
Write-Output "3";
}
Else
{
Write-Output "1";
}
''' % dict(path=path)
return self._encode_script(script)
def build_module_command(self, env_string, shebang, cmd, arg_path=None):
bootstrap_wrapper = pkgutil.get_data("ansible.executor.powershell", "bootstrap_wrapper.ps1")
# pipelining bypass
if cmd == '':
return self._encode_script(script=bootstrap_wrapper, strict_mode=False, preserve_rc=False)
# non-pipelining
cmd_parts = shlex.split(cmd, posix=False)
cmd_parts = list(map(to_text, cmd_parts))
if shebang and shebang.lower() == '#!powershell':
if not self._unquote(cmd_parts[0]).lower().endswith('.ps1'):
# we're running a module via the bootstrap wrapper
cmd_parts[0] = '"%s.ps1"' % self._unquote(cmd_parts[0])
wrapper_cmd = "type " + cmd_parts[0] + " | " + self._encode_script(script=bootstrap_wrapper, strict_mode=False, preserve_rc=False)
return wrapper_cmd
elif shebang and shebang.startswith('#!'):
cmd_parts.insert(0, shebang[2:])
elif not shebang:
# The module is assumed to be a binary
cmd_parts[0] = self._unquote(cmd_parts[0])
cmd_parts.append(arg_path)
script = '''
Try
{
%s
%s
}
Catch
{
$_obj = @{ failed = $true }
If ($_.Exception.GetType)
{
$_obj.Add('msg', $_.Exception.Message)
}
Else
{
$_obj.Add('msg', $_.ToString())
}
If ($_.InvocationInfo.PositionMessage)
{
$_obj.Add('exception', $_.InvocationInfo.PositionMessage)
}
ElseIf ($_.ScriptStackTrace)
{
$_obj.Add('exception', $_.ScriptStackTrace)
}
Try
{
$_obj.Add('error_record', ($_ | ConvertTo-Json | ConvertFrom-Json))
}
Catch
{
}
Echo $_obj | ConvertTo-Json -Compress -Depth 99
Exit 1
}
''' % (env_string, ' '.join(cmd_parts))
return self._encode_script(script, preserve_rc=False)
def wrap_for_exec(self, cmd):
return '& %s; exit $LASTEXITCODE' % cmd
def _unquote(self, value):
'''Remove any matching quotes that wrap the given value.'''
value = to_text(value or '')
m = re.match(r'^\s*?\'(.*?)\'\s*?$', value)
if m:
return m.group(1)
m = re.match(r'^\s*?"(.*?)"\s*?$', value)
if m:
return m.group(1)
return value
def _escape(self, value, include_vars=False):
'''Return value escaped for use in PowerShell command.'''
# http://www.techotopia.com/index.php/Windows_PowerShell_1.0_String_Quoting_and_Escape_Sequences
# http://stackoverflow.com/questions/764360/a-list-of-string-replacements-in-python
subs = [('\n', '`n'), ('\r', '`r'), ('\t', '`t'), ('\a', '`a'),
('\b', '`b'), ('\f', '`f'), ('\v', '`v'), ('"', '`"'),
('\'', '`\''), ('`', '``'), ('\x00', '`0')]
if include_vars:
subs.append(('$', '`$'))
pattern = '|'.join('(%s)' % re.escape(p) for p, s in subs)
substs = [s for p, s in subs]
def replace(m):
return substs[m.lastindex - 1]
return re.sub(pattern, replace, value)
def _encode_script(self, script, as_list=False, strict_mode=True, preserve_rc=True):
'''Convert a PowerShell script to a single base64-encoded command.'''
script = to_text(script)
if script == u'-':
cmd_parts = _common_args + ['-Command', '-']
else:
if strict_mode:
script = u'Set-StrictMode -Version Latest\r\n%s' % script
# try to propagate exit code if present- won't work with begin/process/end-style scripts (ala put_file)
# NB: the exit code returned may be incorrect in the case of a successful command followed by an invalid command
if preserve_rc:
script = u'%s\r\nIf (-not $?) { If (Get-Variable LASTEXITCODE -ErrorAction SilentlyContinue) { exit $LASTEXITCODE } Else { exit 1 } }\r\n'\
% script
script = '\n'.join([x.strip() for x in script.splitlines() if x.strip()])
encoded_script = to_text(base64.b64encode(script.encode('utf-16-le')), 'utf-8')
cmd_parts = _common_args + ['-EncodedCommand', encoded_script]
if as_list:
return cmd_parts
return ' '.join(cmd_parts)
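The `_encode_script` helper above relies on PowerShell's `-EncodedCommand` convention: the script text must be UTF-16-LE encoded before base64. A standalone sketch of that transform (the function name `encode_powershell_command` is mine, not Ansible's):

```python
import base64

def encode_powershell_command(script):
    """Base64-encode a script the way -EncodedCommand expects (UTF-16-LE)."""
    return base64.b64encode(script.encode("utf-16-le")).decode("ascii")

encoded = encode_powershell_command('Write-Output "hi"')
# Round-tripping through base64 + UTF-16-LE recovers the original script.
assert base64.b64decode(encoded).decode("utf-16-le") == 'Write-Output "hi"'
```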
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,341 |
Add UNC path support to the Fetch action
|
##### SUMMARY
Fetch module fails with path not found error when used with UNC path.
##### ISSUE TYPE
- ~Bug Report~
- Feature Idea
##### COMPONENT NAME
Fetch module
##### ANSIBLE VERSION
```
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ikanse/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:09:47) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
```
No changes from the default config
```
##### OS / ENVIRONMENT
```
Control node:
Fedora release 30 (Thirty)
Managed node:
Windows Server 2016 Datacenter
Version: 1607
```
##### STEPS TO REPRODUCE
Create a share on the managed Windows node.
```
- hosts: localhost
gather_facts: false
tasks:
- name: Add hostname supplied by variable to adhoc_group
add_host:
name: "HOST"
groups: adhoc_group
ansible_user: Administrator
ansible_password: 'PASSWORD'
ansible_connection: winrm
ansible_winrm_transport: basic
ansible_winrm_server_cert_validation: ignore
ansible_winrm_connection_timeout: 600
- hosts: adhoc_group
tasks:
- name: win copy
win_copy:
dest: '\\EC2AMAZ-T130RGR\testshare\test123.txt'
src: testvars.yml
- name: access file
fetch:
src: '\\EC2AMAZ-T130RGR\testshare\test123.txt'
dest: /tmp/
flat: yes
```
##### EXPECTED RESULTS
Path is rendered correctly and the file is fetched from the remote server.
##### ACTUAL RESULTS
The path is not rendered correctly by fetch module:
```
"msg": "Path EC2AMAZ-T130RGR\\testshare\\test123.txt is not found"
```
From win_copy module we can see that correct path is used:
```
"dest": "\\\\EC2AMAZ-T130RGR\\testshare\\test123.txt",
```
```
ansible-playbook 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ikanse/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.7.5 (default, Oct 17 2019, 12:09:47) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAYBOOK: main.yml *************************************************************************************
2 plays in main.yml
PLAY [localhost] ***************************************************************************************
META: ran handlers
TASK [Add hostname supplied by variable to adhoc_group] ************************************************
task path: /home/ikanse/ansible/windows/main.yml:5
creating host via 'add_host': hostname=13.235.83.165
changed: [localhost] => {
"add_host": {
"groups": [
"adhoc_group"
],
"host_name": "13.235.83.165",
"host_vars": {
"ansible_connection": "winrm",
"ansible_password": "PASSWORD",
"ansible_user": "Administrator",
"ansible_winrm_connection_timeout": 600,
"ansible_winrm_server_cert_validation": "ignore",
"ansible_winrm_transport": "basic"
}
},
"changed": true
}
META: ran handlers
META: ran handlers
PLAY [adhoc_group] *************************************************************************************
TASK [Gathering Facts] *********************************************************************************
task path: /home/ikanse/ansible/windows/main.yml:16
Using module file /usr/lib/python3.7/site-packages/ansible/modules/windows/setup.ps1
Pipelining is enabled.
<13.235.83.165> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 13.235.83.165
EXEC (via pipeline wrapper)
ok: [13.235.83.165]
META: ran handlers
TASK [win copy] ****************************************************************************************
task path: /home/ikanse/ansible/windows/main.yml:18
Using module file /usr/lib/python3.7/site-packages/ansible/modules/windows/win_copy.ps1
Pipelining is enabled.
<13.235.83.165> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 13.235.83.165
EXEC (via pipeline wrapper)
ok: [13.235.83.165] => {
"changed": false,
"checksum": "4e8bfbc031942c909e62592f6a3e728af39c156c",
"dest": "\\\\EC2AMAZ-T130RGR\\testshare\\test123.txt",
"operation": "file_copy",
"original_basename": "testvars.yml",
"size": 15,
"src": "testvars.yml"
}
TASK [access file] *************************************************************************************
task path: /home/ikanse/ansible/windows/main.yml:23
Using module file /usr/lib/python3.7/site-packages/ansible/modules/windows/win_stat.ps1
Pipelining is enabled.
<13.235.83.165> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO 13.235.83.165
EXEC (via pipeline wrapper)
Using module file /usr/lib/python3.7/site-packages/ansible/modules/windows/slurp.ps1
Pipelining is enabled.
EXEC (via pipeline wrapper)
fatal: [13.235.83.165]: FAILED! => {
"changed": false,
"msg": "Path EC2AMAZ-T130RGR\\testshare\\test123.txt is not found"
}
PLAY RECAP *********************************************************************************************
13.235.83.165 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
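A plausible mechanism for the mangled path (an assumption based on the shell plugin's `join_path`, which splits on backslashes and discards empty components) can be reproduced in isolation: the leading `\\` of a UNC path becomes two empty components and is silently dropped.

```python
# Hedged illustration of why the UNC prefix disappears: a join/split that
# drops empty path components collapses the leading '\\' of a UNC path.
def naive_join(path):
    parts = [p for p in path.replace('/', '\\').split('\\') if p]
    return '\\'.join(parts)

assert naive_join(r'\\EC2AMAZ-T130RGR\testshare\test123.txt') == \
    r'EC2AMAZ-T130RGR\testshare\test123.txt'  # leading '\\' lost
```

The result matches the error message reported above, while a plain drive-letter path survives the same round trip unchanged.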
|
https://github.com/ansible/ansible/issues/66341
|
https://github.com/ansible/ansible/pull/66604
|
81378b3e744cd0d13b33d18a4f8a38aeb8a6e97a
|
fc7980af9a42676913b4054163570ee438b82e9c
| 2020-01-10T09:31:47Z |
python
| 2020-02-04T06:34:11Z |
test/units/plugins/shell/test_powershell.py
|
from ansible.plugins.shell.powershell import _parse_clixml
def test_parse_clixml_empty():
empty = b'#< CLIXML\r\n<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"></Objs>'
expected = b''
actual = _parse_clixml(empty)
assert actual == expected
def test_parse_clixml_with_progress():
progress = b'#< CLIXML\r\n<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04">' \
b'<Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS>' \
b'<I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil />' \
b'<PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>'
expected = b''
actual = _parse_clixml(progress)
assert actual == expected
def test_parse_clixml_single_stream():
single_stream = b'#< CLIXML\r\n<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04">' \
b'<S S="Error">fake : The term \'fake\' is not recognized as the name of a cmdlet. Check _x000D__x000A_</S>' \
b'<S S="Error">the spelling of the name, or if a path was included._x000D__x000A_</S>' \
b'<S S="Error">At line:1 char:1_x000D__x000A_</S>' \
b'<S S="Error">+ fake cmdlet_x000D__x000A_</S><S S="Error">+ ~~~~_x000D__x000A_</S>' \
b'<S S="Error"> + CategoryInfo : ObjectNotFound: (fake:String) [], CommandNotFoundException_x000D__x000A_</S>' \
b'<S S="Error"> + FullyQualifiedErrorId : CommandNotFoundException_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S>' \
b'</Objs>'
expected = b"fake : The term 'fake' is not recognized as the name of a cmdlet. Check \r\n" \
b"the spelling of the name, or if a path was included.\r\n" \
b"At line:1 char:1\r\n" \
b"+ fake cmdlet\r\n" \
b"+ ~~~~\r\n" \
b" + CategoryInfo : ObjectNotFound: (fake:String) [], CommandNotFoundException\r\n" \
b" + FullyQualifiedErrorId : CommandNotFoundException\r\n "
actual = _parse_clixml(single_stream)
assert actual == expected
def test_parse_clixml_multiple_streams():
multiple_stream = b'#< CLIXML\r\n<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04">' \
b'<S S="Error">fake : The term \'fake\' is not recognized as the name of a cmdlet. Check _x000D__x000A_</S>' \
b'<S S="Error">the spelling of the name, or if a path was included._x000D__x000A_</S>' \
b'<S S="Error">At line:1 char:1_x000D__x000A_</S>' \
b'<S S="Error">+ fake cmdlet_x000D__x000A_</S><S S="Error">+ ~~~~_x000D__x000A_</S>' \
b'<S S="Error"> + CategoryInfo : ObjectNotFound: (fake:String) [], CommandNotFoundException_x000D__x000A_</S>' \
b'<S S="Error"> + FullyQualifiedErrorId : CommandNotFoundException_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S>' \
b'<S S="Info">hi info</S>' \
b'</Objs>'
expected = b"hi info"
actual = _parse_clixml(multiple_stream, stream="Info")
assert actual == expected
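The `_x000D__x000A_` sequences in these fixtures are CLIXML's escape scheme for control characters. A hedged standalone decoder for that `_xHHHH_` convention (my own sketch, not the implementation inside `_parse_clixml`):

```python
import re

def decode_clixml_escapes(text):
    """Turn CLIXML _xHHHH_ escapes (e.g. _x000D__x000A_) back into characters."""
    return re.sub(r"_x([0-9A-Fa-f]{4})_", lambda m: chr(int(m.group(1), 16)), text)

assert decode_clixml_escapes("At line:1 char:1_x000D__x000A_") == "At line:1 char:1\r\n"
```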
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,366 |
ANSIBLE_DUPLICATE_YAML_DICT_KEY=error crashes with a bug report
|
##### SUMMARY
This is a follow-up to #16903.
Setting the new env var `ANSIBLE_DUPLICATE_YAML_DICT_KEY` implemented in #56933 to `error` outputs a cryptic `ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'line'` instead of the message defined [here](https://github.com/ansible/ansible/blob/d335d7a62c022d702a29a0ff55cd0c526ec2c5ad/lib/ansible/parsing/yaml/constructor.py#L74). I think this will confuse people, since it's not obvious what causes the problem in a larger project. (cc @bcoca)
Setting `ANSIBLE_DUPLICATE_YAML_DICT_KEY=ignore` or `ANSIBLE_DUPLICATE_YAML_DICT_KEY=warn` works as expected.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
YAML parsing?
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = None
configured module search path = ['/Users/mtodor/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
```
N/A
```
##### OS / ENVIRONMENT
```
OSX / Mojave
```
##### STEPS TO REPRODUCE
`vars.yml`:
```yaml
---
x-default: &default-customer
ID: foo
Customers:
- <<: *default-customer
ID: bar
```
`playbook.yml`:
```yaml
---
- hosts: localhost
gather_facts: no
vars_files:
- vars.yml
tasks:
- meta: end_play
```
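Why this vars file trips the duplicate-key check at all: YAML merge keys (`<<: *default-customer`) are flattened into the mapping's entry list before construction, so the merged `ID` and the explicit `ID` arrive as two entries with the same key. A hedged pure-Python illustration (assuming merged entries are prepended, so the explicit value wins as "last defined"):

```python
# Merge-key flattening turns the list item into a key/value sequence in
# which 'ID' appears twice.
merged_pairs = [("ID", "foo")]      # contributed by <<: *default-customer
explicit_pairs = [("ID", "bar")]    # written directly on the list item
flattened = merged_pairs + explicit_pairs

keys = [k for k, _ in flattened]
assert keys.count("ID") == 2        # the duplicate that triggers the check
assert dict(flattened) == {"ID": "bar"}  # "Using last defined value only."
```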
##### EXPECTED RESULTS
Setting `ANSIBLE_DUPLICATE_YAML_DICT_KEY=error` should output something like `ERROR! Syntax Error while loading YAML.`
##### ACTUAL RESULTS
A cryptic error about an Ansible bug gets emitted:
```
> ANSIBLE_DUPLICATE_YAML_DICT_KEY=error AWS_PROFILE=build ansible-playbook -i local, playbook.yml
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'line'
to see the full traceback, use -vvv
```
Verbose output:
```
> ANSIBLE_DUPLICATE_YAML_DICT_KEY=error AWS_PROFILE=build ansible-playbook -vvv -i local, playbook.yml
ansible-playbook 2.9.1
config file = None
configured module search path = ['/Users/mtodor/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8)]
No config file found; using defaults
Parsed local, inventory source with host_list plugin
PLAYBOOK: playbook.yml *****************************************************************************************************************************************************************************************************************************************************
1 plays in playbook.yml
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'line'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/utils/yaml.py", line 70, in from_yaml
new_data = json.loads(data, cls=AnsibleJSONDecoder)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/__init__.py", line 361, in loads
return cls(**kw).decode(s)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/utils/yaml.py", line 74, in from_yaml
new_data = _safe_load(data, file_name=file_name, vault_secrets=vault_secrets)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/utils/yaml.py", line 49, in _safe_load
return loader.get_single_data()
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/yaml/constructor.py", line 43, in get_single_data
return self.construct_document(node)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/yaml/constructor.py", line 52, in construct_document
for dummy in generator:
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/yaml/constructor.py", line 47, in construct_yaml_map
value = self.construct_mapping(node)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/yaml/constructor.py", line 79, in construct_mapping
raise ConstructorError(to_native(msg))
yaml.constructor.ConstructorError: While constructing a mapping from /Users/mtodor/vars.yml, line 7, column 5, found a duplicate dict key (ID). Using last defined value only.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/cli/playbook.py", line 127, in run
results = pbex.run()
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/executor/playbook_executor.py", line 116, in run
all_vars = self._variable_manager.get_vars(play=play)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/vars/manager.py", line 349, in get_vars
data = preprocess_vars(self._loader.load_from_file(vars_file, unsafe=True))
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/dataloader.py", line 89, in load_from_file
parsed_data = self.load(data=file_data, file_name=file_name, show_content=show_content)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/dataloader.py", line 72, in load
return from_yaml(data, file_name, show_content, self._vault.secrets)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/utils/yaml.py", line 76, in from_yaml
_handle_error(yaml_exc, file_name, show_content)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/utils/yaml.py", line 37, in _handle_error
err_obj.ansible_pos = (file_name, yaml_exc.problem_mark.line + 1, yaml_exc.problem_mark.column + 1)
AttributeError: 'NoneType' object has no attribute 'line'
```
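The root cause visible in the traceback: `construct_mapping` raises `ConstructorError(to_native(msg))` with no `problem_mark`, so `_handle_error`'s `yaml_exc.problem_mark.line` dereferences `None`. A minimal stand-in class (used here so the sketch runs without PyYAML installed) reproduces the same failure mode:

```python
# Hedged reconstruction of the crash: an exception whose problem_mark was
# never set behaves like ConstructorError raised with only a message.
class FakeConstructorError(Exception):
    def __init__(self, problem, problem_mark=None):
        super().__init__(problem)
        self.problem_mark = problem_mark  # None when no mark is attached

exc = FakeConstructorError("found a duplicate dict key (ID)")
try:
    _ = exc.problem_mark.line  # mirrors _handle_error's attribute access
    raise AssertionError("expected AttributeError")
except AttributeError as err:
    assert "'NoneType' object has no attribute 'line'" in str(err)
```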
|
https://github.com/ansible/ansible/issues/65366
|
https://github.com/ansible/ansible/pull/66786
|
3b32f95fb39a0faf810bf3aa6024d704d99c7156
|
994a6b0c5a7929051e5e2101004ef536ec47c0b3
| 2019-11-29T14:42:57Z |
python
| 2020-02-04T18:53:13Z |
changelogs/fragments/66786-fix-duplicate-yaml-key-error.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,366 |
ANSIBLE_DUPLICATE_YAML_DICT_KEY=error crashes with a bug report
|
##### SUMMARY
This is a follow-up to #16903.
Setting the new env var `ANSIBLE_DUPLICATE_YAML_DICT_KEY` implemented in #56933 to `error` outputs a cryptic `ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'line'` instead of the message defined [here](https://github.com/ansible/ansible/blob/d335d7a62c022d702a29a0ff55cd0c526ec2c5ad/lib/ansible/parsing/yaml/constructor.py#L74). I think this will confuse people, since it's not obvious what causes the problem in a larger project. (cc @bcoca)
Setting `ANSIBLE_DUPLICATE_YAML_DICT_KEY=ignore` or `ANSIBLE_DUPLICATE_YAML_DICT_KEY=warn` works as expected.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
YAML parsing?
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = None
configured module search path = ['/Users/mtodor/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
```
N/A
```
##### OS / ENVIRONMENT
```
OSX / Mojave
```
##### STEPS TO REPRODUCE
`vars.yml`:
```yaml
---
x-default: &default-customer
ID: foo
Customers:
- <<: *default-customer
ID: bar
```
`playbook.yml`:
```yaml
---
- hosts: localhost
gather_facts: no
vars_files:
- vars.yml
tasks:
- meta: end_play
```
##### EXPECTED RESULTS
Setting `ANSIBLE_DUPLICATE_YAML_DICT_KEY=error` should output something like `ERROR! Syntax Error while loading YAML.`
##### ACTUAL RESULTS
A cryptic error about an Ansible bug gets emitted:
```
> ANSIBLE_DUPLICATE_YAML_DICT_KEY=error AWS_PROFILE=build ansible-playbook -i local, playbook.yml
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'line'
to see the full traceback, use -vvv
```
Verbose output:
```
> ANSIBLE_DUPLICATE_YAML_DICT_KEY=error AWS_PROFILE=build ansible-playbook -vvv -i local, playbook.yml
ansible-playbook 2.9.1
config file = None
configured module search path = ['/Users/mtodor/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8)]
No config file found; using defaults
Parsed local, inventory source with host_list plugin
PLAYBOOK: playbook.yml *****************************************************************************************************************************************************************************************************************************************************
1 plays in playbook.yml
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object has no attribute 'line'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/utils/yaml.py", line 70, in from_yaml
new_data = json.loads(data, cls=AnsibleJSONDecoder)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/__init__.py", line 361, in loads
return cls(**kw).decode(s)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/utils/yaml.py", line 74, in from_yaml
new_data = _safe_load(data, file_name=file_name, vault_secrets=vault_secrets)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/utils/yaml.py", line 49, in _safe_load
return loader.get_single_data()
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/yaml/constructor.py", line 43, in get_single_data
return self.construct_document(node)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/yaml/constructor.py", line 52, in construct_document
for dummy in generator:
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/yaml/constructor.py", line 47, in construct_yaml_map
value = self.construct_mapping(node)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/yaml/constructor.py", line 79, in construct_mapping
raise ConstructorError(to_native(msg))
yaml.constructor.ConstructorError: While constructing a mapping from /Users/mtodor/vars.yml, line 7, column 5, found a duplicate dict key (ID). Using last defined value only.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/cli/playbook.py", line 127, in run
results = pbex.run()
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/executor/playbook_executor.py", line 116, in run
all_vars = self._variable_manager.get_vars(play=play)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/vars/manager.py", line 349, in get_vars
data = preprocess_vars(self._loader.load_from_file(vars_file, unsafe=True))
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/dataloader.py", line 89, in load_from_file
parsed_data = self.load(data=file_data, file_name=file_name, show_content=show_content)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/dataloader.py", line 72, in load
return from_yaml(data, file_name, show_content, self._vault.secrets)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/utils/yaml.py", line 76, in from_yaml
_handle_error(yaml_exc, file_name, show_content)
File "/usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/parsing/utils/yaml.py", line 37, in _handle_error
err_obj.ansible_pos = (file_name, yaml_exc.problem_mark.line + 1, yaml_exc.problem_mark.column + 1)
AttributeError: 'NoneType' object has no attribute 'line'
```
|
https://github.com/ansible/ansible/issues/65366
|
https://github.com/ansible/ansible/pull/66786
|
3b32f95fb39a0faf810bf3aa6024d704d99c7156
|
994a6b0c5a7929051e5e2101004ef536ec47c0b3
| 2019-11-29T14:42:57Z |
python
| 2020-02-04T18:53:13Z |
lib/ansible/parsing/yaml/constructor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from yaml.constructor import SafeConstructor, ConstructorError
from yaml.nodes import MappingNode
from ansible import constants as C
from ansible.module_utils._text import to_bytes, to_native
from ansible.parsing.yaml.objects import AnsibleMapping, AnsibleSequence, AnsibleUnicode
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
from ansible.utils.unsafe_proxy import wrap_var
from ansible.parsing.vault import VaultLib
from ansible.utils.display import Display
display = Display()
class AnsibleConstructor(SafeConstructor):
def __init__(self, file_name=None, vault_secrets=None):
self._ansible_file_name = file_name
super(AnsibleConstructor, self).__init__()
self._vaults = {}
self.vault_secrets = vault_secrets or []
self._vaults['default'] = VaultLib(secrets=self.vault_secrets)
def construct_yaml_map(self, node):
data = AnsibleMapping()
yield data
value = self.construct_mapping(node)
data.update(value)
data.ansible_pos = self._node_position_info(node)
def construct_mapping(self, node, deep=False):
# Most of this is from yaml.constructor.SafeConstructor. We replicate
# it here so that we can warn users when they have duplicate dict keys
# (pyyaml silently allows overwriting keys)
if not isinstance(node, MappingNode):
raise ConstructorError(None, None,
"expected a mapping node, but found %s" % node.id,
node.start_mark)
self.flatten_mapping(node)
mapping = AnsibleMapping()
# Add our extra information to the returned value
mapping.ansible_pos = self._node_position_info(node)
for key_node, value_node in node.value:
key = self.construct_object(key_node, deep=deep)
try:
hash(key)
except TypeError as exc:
raise ConstructorError("while constructing a mapping", node.start_mark,
"found unacceptable key (%s)" % exc, key_node.start_mark)
if key in mapping:
msg = (u'While constructing a mapping from {1}, line {2}, column {3}, found a duplicate dict key ({0}).'
u' Using last defined value only.'.format(key, *mapping.ansible_pos))
if C.DUPLICATE_YAML_DICT_KEY == 'warn':
display.warning(msg)
elif C.DUPLICATE_YAML_DICT_KEY == 'error':
raise ConstructorError(to_native(msg))
else:
# when 'ignore'
display.debug(msg)
value = self.construct_object(value_node, deep=deep)
mapping[key] = value
return mapping
def construct_yaml_str(self, node):
# Override the default string handling function
# to always return unicode objects
value = self.construct_scalar(node)
ret = AnsibleUnicode(value)
ret.ansible_pos = self._node_position_info(node)
return ret
def construct_vault_encrypted_unicode(self, node):
value = self.construct_scalar(node)
b_ciphertext_data = to_bytes(value)
# could pass in a key id here to choose the vault to associate with
# TODO/FIXME: plugin vault selector
vault = self._vaults['default']
if vault.secrets is None:
raise ConstructorError(context=None, context_mark=None,
problem="found !vault but no vault password provided",
problem_mark=node.start_mark,
note=None)
ret = AnsibleVaultEncryptedUnicode(b_ciphertext_data)
ret.vault = vault
return ret
def construct_yaml_seq(self, node):
data = AnsibleSequence()
yield data
data.extend(self.construct_sequence(node))
data.ansible_pos = self._node_position_info(node)
def construct_yaml_unsafe(self, node):
return wrap_var(self.construct_yaml_str(node))
def _node_position_info(self, node):
# the line number where the previous token has ended (plus empty lines)
# Add one so that the first line is line 1 rather than line 0
column = node.start_mark.column + 1
line = node.start_mark.line + 1
# in some cases, we may have pre-read the data and then
# passed it to the load() call for YAML, in which case we
# want to override the default datasource (which would be
# '<string>') to the actual filename we read in
datasource = self._ansible_file_name or node.start_mark.name
return (datasource, line, column)
AnsibleConstructor.add_constructor(
u'tag:yaml.org,2002:map',
AnsibleConstructor.construct_yaml_map)
AnsibleConstructor.add_constructor(
u'tag:yaml.org,2002:python/dict',
AnsibleConstructor.construct_yaml_map)
AnsibleConstructor.add_constructor(
u'tag:yaml.org,2002:str',
AnsibleConstructor.construct_yaml_str)
AnsibleConstructor.add_constructor(
u'tag:yaml.org,2002:python/unicode',
AnsibleConstructor.construct_yaml_str)
AnsibleConstructor.add_constructor(
u'tag:yaml.org,2002:seq',
AnsibleConstructor.construct_yaml_seq)
AnsibleConstructor.add_constructor(
u'!unsafe',
AnsibleConstructor.construct_yaml_unsafe)
AnsibleConstructor.add_constructor(
u'!vault',
AnsibleConstructor.construct_vault_encrypted_unicode)
AnsibleConstructor.add_constructor(u'!vault-encrypted', AnsibleConstructor.construct_vault_encrypted_unicode)
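The duplicate-key policy enforced in `construct_mapping` above can be illustrated without PyYAML or Ansible at all. The sketch below operates on plain `(key, value)` pairs rather than YAML nodes; `build_mapping` and its `on_duplicate` argument are names invented here to mirror `C.DUPLICATE_YAML_DICT_KEY` ('warn', 'error', 'ignore'):

```python
import warnings

def build_mapping(pairs, on_duplicate="warn"):
    """Mimic the duplicate-key policy of AnsibleConstructor.construct_mapping.

    `pairs` plays the role of a YAML mapping node's (key, value) list.
    As in the real code, the last value for a duplicated key wins.
    """
    mapping = {}
    for key, value in pairs:
        # Unhashable keys are rejected, like the hash(key) check above
        try:
            hash(key)
        except TypeError as exc:
            raise ValueError("found unacceptable key (%s)" % exc)
        if key in mapping:
            msg = "found a duplicate dict key (%s). Using last defined value only." % key
            if on_duplicate == "warn":
                warnings.warn(msg)
            elif on_duplicate == "error":
                raise ValueError(msg)
            # 'ignore': keep going silently
        mapping[key] = value
    return mapping
```

With `on_duplicate="ignore"`, `build_mapping([("a", 1), ("a", 2)])` yields `{"a": 2}`; with `"error"` it raises, matching the three configurable behaviors above.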
| closed | ansible/ansible | https://github.com/ansible/ansible | 67031 | pipelining: wait_for_connection only tries python discovery on first connection attempt |
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Same issue as (locked and closed) https://github.com/ansible/ansible/issues/63285
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
wait_for_connection
python discovery
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:21:00) [GCC 8.3.1 20190223 (Red Hat 8.3.1-2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 29
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
1. start a `wait_for_connection` task
2. create host with no `/usr/bin/python`
3. observe that `wait_for_connection` fails due to not detecting python and attempting to use `/usr/bin/python`
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
ansible connects successfully
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
wait_for_connection: attempting ping module test
<vmguest136> Attempting python interpreter discovery
<192.168.122.23> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.122.23> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/user/.ansible/cp/2f078e0ae4 192.168.122.23 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<192.168.122.23> (255, b'', b'ssh: connect to host 192.168.122.23 port 22: Connection refused\r\n')
[WARNING]: Unhandled error in Python interpreter discovery for host vmguest136: Failed to connect to the host via ssh: ssh: connect to host 192.168.122.23
port 22: Connection refused
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/ping.py
Pipelining is enabled.
<192.168.122.23> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.122.23> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/user/.ansible/cp/2f078e0ae4 192.168.122.23 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
wait_for_connection: attempting ping module test
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/ping.py
Pipelining is enabled.
<192.168.122.23> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.122.23> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/user/.ansible/cp/2f078e0ae4 192.168.122.23 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
wait_for_connection: attempting ping module test
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/ping.py
Pipelining is enabled.
<192.168.122.23> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.122.23> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/user/.ansible/cp/2f078e0ae4 192.168.122.23 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
wait_for_connection: attempting ping module test
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/ping.py
Pipelining is enabled.
<192.168.122.23> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.122.23> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/user/.ansible/cp/2f078e0ae4 192.168.122.23 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
wait_for_connection: attempting ping module test
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/ping.py
Pipelining is enabled.
```
| https://github.com/ansible/ansible/issues/67031 | https://github.com/ansible/ansible/pull/67040 | f4a80bb600510669801c5d5c0a250952748e99fd | fd954a9c5c05c7149eb23271529ff070f2b1f9dc | 2020-02-02T10:13:41Z | python | 2020-02-04T19:40:09Z | changelogs/fragments/wait_for_connection-interpreter-discovery-retry.yaml | |
closed | ansible/ansible | https://github.com/ansible/ansible | 67031 | pipelining: wait_for_connection only tries python discovery on first connection attempt |
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Same issue as (locked and closed) https://github.com/ansible/ansible/issues/63285
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
wait_for_connection
python discovery
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:21:00) [GCC 8.3.1 20190223 (Red Hat 8.3.1-2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 29
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
1. start a `wait_for_connection` task
2. create host with no `/usr/bin/python`
3. observe that `wait_for_connection` fails due to not detecting python and attempting to use `/usr/bin/python`
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
ansible connects successfully
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
wait_for_connection: attempting ping module test
<vmguest136> Attempting python interpreter discovery
<192.168.122.23> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.122.23> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/user/.ansible/cp/2f078e0ae4 192.168.122.23 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<192.168.122.23> (255, b'', b'ssh: connect to host 192.168.122.23 port 22: Connection refused\r\n')
[WARNING]: Unhandled error in Python interpreter discovery for host vmguest136: Failed to connect to the host via ssh: ssh: connect to host 192.168.122.23
port 22: Connection refused
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/ping.py
Pipelining is enabled.
<192.168.122.23> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.122.23> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/user/.ansible/cp/2f078e0ae4 192.168.122.23 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
wait_for_connection: attempting ping module test
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/ping.py
Pipelining is enabled.
<192.168.122.23> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.122.23> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/user/.ansible/cp/2f078e0ae4 192.168.122.23 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
wait_for_connection: attempting ping module test
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/ping.py
Pipelining is enabled.
<192.168.122.23> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.122.23> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/user/.ansible/cp/2f078e0ae4 192.168.122.23 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
wait_for_connection: attempting ping module test
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/ping.py
Pipelining is enabled.
<192.168.122.23> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.122.23> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ControlPath=/home/user/.ansible/cp/2f078e0ae4 192.168.122.23 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
wait_for_connection: attempting ping module test
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/ping.py
Pipelining is enabled.
```
| https://github.com/ansible/ansible/issues/67031 | https://github.com/ansible/ansible/pull/67040 | f4a80bb600510669801c5d5c0a250952748e99fd | fd954a9c5c05c7149eb23271529ff070f2b1f9dc | 2020-02-02T10:13:41Z | python | 2020-02-04T19:40:09Z | lib/ansible/plugins/action/wait_for_connection.py |
# (c) 2017, Dag Wieers <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# CI-required python3 boilerplate
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import time
from datetime import datetime, timedelta
from ansible.module_utils._text import to_text
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
display = Display()
class TimedOutException(Exception):
pass
class ActionModule(ActionBase):
TRANSFERS_FILES = False
_VALID_ARGS = frozenset(('connect_timeout', 'delay', 'sleep', 'timeout'))
DEFAULT_CONNECT_TIMEOUT = 5
DEFAULT_DELAY = 0
DEFAULT_SLEEP = 1
DEFAULT_TIMEOUT = 600
def do_until_success_or_timeout(self, what, timeout, connect_timeout, what_desc, sleep=1):
max_end_time = datetime.utcnow() + timedelta(seconds=timeout)
        error = None
while datetime.utcnow() < max_end_time:
try:
what(connect_timeout)
if what_desc:
display.debug("wait_for_connection: %s success" % what_desc)
return
except Exception as e:
error = e # PY3 compatibility to store exception for use outside of this block
if what_desc:
display.debug("wait_for_connection: %s fail (expected), retrying in %d seconds..." % (what_desc, sleep))
time.sleep(sleep)
raise TimedOutException("timed out waiting for %s: %s" % (what_desc, error))
def run(self, tmp=None, task_vars=None):
if task_vars is None:
task_vars = dict()
connect_timeout = int(self._task.args.get('connect_timeout', self.DEFAULT_CONNECT_TIMEOUT))
delay = int(self._task.args.get('delay', self.DEFAULT_DELAY))
sleep = int(self._task.args.get('sleep', self.DEFAULT_SLEEP))
timeout = int(self._task.args.get('timeout', self.DEFAULT_TIMEOUT))
if self._play_context.check_mode:
display.vvv("wait_for_connection: skipping for check_mode")
return dict(skipped=True)
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
def ping_module_test(connect_timeout):
''' Test ping module, if available '''
display.vvv("wait_for_connection: attempting ping module test")
# call connection reset between runs if it's there
try:
self._connection.reset()
except AttributeError:
pass
# Use win_ping on winrm/powershell, else use ping
if getattr(self._connection._shell, "_IS_WINDOWS", False):
ping_result = self._execute_module(module_name='win_ping', module_args=dict(), task_vars=task_vars)
else:
ping_result = self._execute_module(module_name='ping', module_args=dict(), task_vars=task_vars)
# Test module output
if ping_result['ping'] != 'pong':
raise Exception('ping test failed')
start = datetime.now()
if delay:
time.sleep(delay)
try:
# If the connection has a transport_test method, use it first
if hasattr(self._connection, 'transport_test'):
self.do_until_success_or_timeout(self._connection.transport_test, timeout, connect_timeout, what_desc="connection port up", sleep=sleep)
# Use the ping module test to determine end-to-end connectivity
self.do_until_success_or_timeout(ping_module_test, timeout, connect_timeout, what_desc="ping module test", sleep=sleep)
except TimedOutException as e:
result['failed'] = True
result['msg'] = to_text(e)
elapsed = datetime.now() - start
result['elapsed'] = elapsed.seconds
# remove a temporary path we created
self._remove_tmp_path(self._connection._shell.tmpdir)
return result
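The retry loop at the heart of this action plugin can be isolated as below. Note that, per the issue above, the ping attempts themselves do retry; the bug was that interpreter discovery only ran on the first connection attempt, so subsequent retries fell back to `/usr/bin/python`. This is a standalone sketch of the loop only (names reused from the plugin, but this is not the plugin itself):

```python
import time
from datetime import datetime, timedelta

class TimedOutException(Exception):
    pass

def do_until_success_or_timeout(what, timeout, what_desc="check", sleep=0.01):
    # Keep calling `what()` until it returns without raising, or until
    # `timeout` seconds have elapsed, sleeping between attempts.
    max_end_time = datetime.utcnow() + timedelta(seconds=timeout)
    error = None
    while datetime.utcnow() < max_end_time:
        try:
            return what()
        except Exception as e:
            error = e  # keep the last failure for the timeout message
            time.sleep(sleep)
    raise TimedOutException("timed out waiting for %s: %s" % (what_desc, error))
```

A flaky check that succeeds on its third call returns normally; a check that never succeeds raises `TimedOutException` once the deadline passes.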
|
closed | ansible/ansible | https://github.com/ansible/ansible | 65332 | Add functionality to `nxos_l2_interfaces` to append the allowed vlans list |
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Currently, the `nxos_l2_interfaces` module does not provide an option to append vlans to the allowed vlan list. When using `state: merged`, any config that was untouched by the task remains, but any configuration provided will replace the current config. This option was available in `nxos_l2_interface` via the `trunk_vlans` param. The module currently sends `switchport trunk allowed vlan <#>` rather than `switchport trunk allowed vlan add <#>`, so a full list of allowed vlans must be supplied.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
nxos_l2_interfaces
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/Users/bdudas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
an-nxos-02# show version
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Documents: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.html
Copyright (c) 2002-2016, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained herein are owned by
other third parties and are used and distributed under license.
Some parts of this software are covered under the GNU Public
License. A copy of the license is available at
http://www.gnu.org/licenses/gpl.html.
NX-OSv is a demo version of the Nexus Operating System
Software
loader: version N/A
kickstart: version 7.3(0)D1(1)
system: version 7.3(0)D1(1)
kickstart image file is: bootflash:///titanium-d1-kickstart.7.3.0.D1.1.bin
kickstart compile time: 1/11/2016 16:00:00 [02/11/2016 10:30:12]
system image file is: bootflash:///titanium-d1.7.3.0.D1.1.bin
system compile time: 1/11/2016 16:00:00 [02/11/2016 13:08:11]
Hardware
cisco NX-OSv Chassis ("NX-OSv Supervisor Module")
QEMU Virtual CPU version 2.5 with 3064740 kB of memory.
Processor Board ID TM000B0000B
Device name: an-nxos-02
bootflash: 3184776 kB
Kernel uptime is 0 day(s), 2 hour(s), 44 minute(s), 51 second(s)
plugin
Core Plugin, Ethernet Plugin
Active Package(s)
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: nxos
gather_facts: false
vars:
new_vlans: 200
tasks:
- name: add vlans to allowed list
nxos_l2_interfaces:
config:
- name: Ethernet2/31
trunk:
allowed_vlans: "{{ new_vlans }}"
state: merged
...
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Starting Interface Config:
```
an-nxos-02# show run interface Ethernet 2/31
!Command: show running-config interface Ethernet2/31
!Time: Wed Nov 27 15:57:00 2019
version 7.3(0)D1(1)
interface Ethernet2/31
shutdown
switchport
switchport trunk native vlan 99
switchport trunk allowed vlan 101,104-106
```
Post Job interface Config:
```
an-nxos-02# show run interface Ethernet 2/31
!Command: show running-config interface Ethernet2/31
!Time: Wed Nov 27 16:04:53 2019
version 7.3(0)D1(1)
interface Ethernet2/31
shutdown
switchport
switchport trunk native vlan 99
switchport trunk allowed vlan 200
```
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook 2.9.1
config file = /Users/fubar/dev-net/nxos/ansible.cfg
configured module search path = ['/Users/fubar/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8)]
Using /Users/fubar/dev-net/nxos/ansible.cfg as config file
host_list declined parsing /Users/fubar/dev-net/inventory/hosts as it did not pass its verify_file() method
script declined parsing /Users/fubar/dev-net/inventory/hosts as it did not pass its verify_file() method
auto declined parsing /Users/fubar/dev-net/inventory/hosts as it did not pass its verify_file() method
Parsed /Users/fubar/dev-net/inventory/hosts inventory source with ini plugin
_____________________________
< PLAYBOOK: allowed_vlans.yml >
-----------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
1 plays in allowed_vlans.yml
_____________
< PLAY [nxos] >
-------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
META: ran handlers
__________________________________
< TASK [add vlans to allowed list] >
----------------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
task path: /Users/fubar/dev-net/nxos/allowed_vlans.yml:8
<nxos-01> ESTABLISH LOCAL CONNECTION FOR USER: fubar
<nxos-01> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828 `" && echo ansible-tmp-1574871651.6025379-270697238693828="` echo /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828 `" ) && sleep 0'
<nxos-01> Attempting python interpreter discovery
<nxos-01> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<nxos-01> Python interpreter discovery fallback (unsupported platform for extended discovery: darwin)
Using module file /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/modules/network/nxos/nxos_l2_interfaces.py
<nxos-01> PUT /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/tmpjicqhgg0 TO /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828/AnsiballZ_nxos_l2_interfaces.py
<nxos-01> EXEC /bin/sh -c 'chmod u+x /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828/ /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828/AnsiballZ_nxos_l2_interfaces.py && sleep 0'
<nxos-01> EXEC /bin/sh -c '/usr/bin/python /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828/AnsiballZ_nxos_l2_interfaces.py && sleep 0'
<nxos-01> EXEC /bin/sh -c 'rm -f -r /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828/ > /dev/null 2>&1 && sleep 0'
[WARNING]: The value 200 (type int) in a string field was converted to u'200' (type string). If this does not look like
what you expect, quote the entire value to ensure it does not change.
[WARNING]: Platform darwin on host nxos-01 is using the discovered Python interpreter at
/usr/bin/python, but future installation of another Python interpreter could change this. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
changed: [nxos-01] => {
"after": [
{
"name": "Ethernet2/31",
"trunk": {
"allowed_vlans": "200",
"native_vlan": 99
}
}
],
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"before": [
{
"name": "Ethernet2/31",
"trunk": {
"allowed_vlans": "101,104,105,106",
"native_vlan": 99
}
}
],
"changed": true,
"commands": [
"interface Ethernet2/31",
"switchport trunk allowed vlan 200"
],
"invocation": {
"module_args": {
"config": [
{
"access": null,
"name": "Ethernet2/31",
"trunk": {
"allowed_vlans": "200",
"native_vlan": null
}
}
],
"state": "merged"
}
}
}
META: ran handlers
META: ran handlers
____________
< PLAY RECAP >
------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
nxos-01 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
| https://github.com/ansible/ansible/issues/65332 | https://github.com/ansible/ansible/pull/66517 | fd954a9c5c05c7149eb23271529ff070f2b1f9dc | 4ac89b8ac7120f553c78eafb294c045f3baa8792 | 2019-11-27T16:31:18Z | python | 2020-02-04T20:14:04Z | lib/ansible/module_utils/network/nxos/config/l2_interfaces/l2_interfaces.py |
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The nxos_l2_interfaces class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to it's desired end-state is
created
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.common.utils import dict_diff, to_list, remove_empties
from ansible.module_utils.network.nxos.facts.facts import Facts
from ansible.module_utils.network.nxos.utils.utils import flatten_dict, normalize_interface, search_obj_in_list, vlan_range_to_list
class L2_interfaces(ConfigBase):
"""
The nxos_l2_interfaces class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'l2_interfaces',
]
exclude_params = [
'vlan',
'allowed_vlans',
'native_vlans',
]
def __init__(self, module):
super(L2_interfaces, self).__init__(module)
def get_l2_interfaces_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
l2_interfaces_facts = facts['ansible_network_resources'].get('l2_interfaces')
if not l2_interfaces_facts:
return []
return l2_interfaces_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
commands = list()
warnings = list()
existing_l2_interfaces_facts = self.get_l2_interfaces_facts()
commands.extend(self.set_config(existing_l2_interfaces_facts))
if commands:
if not self._module.check_mode:
self._connection.edit_config(commands)
result['changed'] = True
result['commands'] = commands
changed_l2_interfaces_facts = self.get_l2_interfaces_facts()
result['before'] = existing_l2_interfaces_facts
if result['changed']:
result['after'] = changed_l2_interfaces_facts
result['warnings'] = warnings
return result
def set_config(self, existing_l2_interfaces_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
config = self._module.params.get('config')
want = []
if config:
for w in config:
w.update({'name': normalize_interface(w['name'])})
self.expand_trunk_allowed_vlans(w)
want.append(remove_empties(w))
have = existing_l2_interfaces_facts
for h in have:
self.expand_trunk_allowed_vlans(h)
resp = self.set_state(want, have)
return to_list(resp)
def expand_trunk_allowed_vlans(self, d):
if not d:
return None
if 'trunk' in d and d['trunk']:
if 'allowed_vlans' in d['trunk']:
allowed_vlans = vlan_range_to_list(d['trunk']['allowed_vlans'])
vlans_list = [str(l) for l in sorted(allowed_vlans)]
d['trunk']['allowed_vlans'] = ",".join(vlans_list)
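`expand_trunk_allowed_vlans` relies on `vlan_range_to_list` from the nxos utils module, which is imported but not shown here. The sketch below pairs a hypothetical reimplementation of that helper with the same normalization, reproducing what the issue output shows (`101,104-106` becoming `101,104,105,106`):

```python
def vlan_range_to_list(vlans):
    # Hypothetical stand-in for the imported nxos helper: expand a
    # range string such as "101,104-106" into a list of ints.
    result = []
    if vlans:
        for part in str(vlans).split(','):
            if '-' in part:
                start, end = part.split('-')
                result.extend(range(int(start), int(end) + 1))
            else:
                result.append(int(part))
    return result

def expand_trunk_allowed_vlans(d):
    # Same normalization as the method above, minus the class wrapper.
    if d and d.get('trunk') and 'allowed_vlans' in d['trunk']:
        allowed = vlan_range_to_list(d['trunk']['allowed_vlans'])
        d['trunk']['allowed_vlans'] = ",".join(str(v) for v in sorted(allowed))
```

Normalizing both `want` and `have` this way lets the later diff compare vlan lists element-wise instead of as opaque range strings.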
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
state = self._module.params['state']
if state in ('overridden', 'merged', 'replaced') and not want:
self._module.fail_json(msg='config is required for state {0}'.format(state))
commands = list()
if state == 'overridden':
commands.extend(self._state_overridden(want, have))
elif state == 'deleted':
commands.extend(self._state_deleted(want, have))
else:
for w in want:
if state == 'merged':
commands.extend(self._state_merged(flatten_dict(w), have))
elif state == 'replaced':
commands.extend(self._state_replaced(flatten_dict(w), have))
return commands
def _state_replaced(self, w, have):
""" The command generator when state is replaced
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
commands = []
obj_in_have = flatten_dict(search_obj_in_list(w['name'], have, 'name'))
if obj_in_have:
diff = dict_diff(w, obj_in_have)
else:
diff = w
merged_commands = self.set_commands(w, have)
if 'name' not in diff:
diff['name'] = w['name']
wkeys = w.keys()
dkeys = diff.keys()
for k in wkeys:
if k in self.exclude_params and k in dkeys:
del diff[k]
replaced_commands = self.del_attribs(diff)
if merged_commands:
cmds = set(replaced_commands).intersection(set(merged_commands))
for cmd in cmds:
merged_commands.remove(cmd)
commands.extend(replaced_commands)
commands.extend(merged_commands)
return commands
def _state_overridden(self, want, have):
""" The command generator when state is overridden
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
commands = []
for h in have:
h = flatten_dict(h)
obj_in_want = flatten_dict(search_obj_in_list(h['name'], want, 'name'))
if h == obj_in_want:
continue
for w in want:
w = flatten_dict(w)
if h['name'] == w['name']:
wkeys = w.keys()
hkeys = h.keys()
for k in wkeys:
if k in self.exclude_params and k in hkeys:
del h[k]
commands.extend(self.del_attribs(h))
for w in want:
commands.extend(self.set_commands(flatten_dict(w), have))
return commands
def _state_merged(self, w, have):
""" The command generator when state is merged
:rtype: A list
:returns: the commands necessary to merge the provided into
the current configuration
"""
return self.set_commands(w, have)
def _state_deleted(self, want, have):
""" The command generator when state is deleted
:rtype: A list
:returns: the commands necessary to remove the current configuration
of the provided objects
"""
commands = []
if want:
for w in want:
obj_in_have = flatten_dict(search_obj_in_list(w['name'], have, 'name'))
commands.extend(self.del_attribs(obj_in_have))
else:
if not have:
return commands
for h in have:
commands.extend(self.del_attribs(flatten_dict(h)))
return commands
def del_attribs(self, obj):
commands = []
if not obj or len(obj.keys()) == 1:
return commands
cmd = 'no switchport '
if 'vlan' in obj:
commands.append(cmd + 'access vlan')
if 'allowed_vlans' in obj:
commands.append(cmd + 'trunk allowed vlan')
if 'native_vlan' in obj:
commands.append(cmd + 'trunk native vlan')
if commands:
commands.insert(0, 'interface ' + obj['name'])
return commands
def diff_of_dicts(self, w, obj):
diff = set(w.items()) - set(obj.items())
diff = dict(diff)
if diff and w['name'] == obj['name']:
diff.update({'name': w['name']})
return diff
def add_commands(self, d):
commands = []
if not d:
return commands
cmd = 'switchport '
if 'vlan' in d:
commands.append(cmd + 'access vlan ' + str(d['vlan']))
if 'allowed_vlans' in d:
commands.append(cmd + 'trunk allowed vlan ' + str(d['allowed_vlans']))
if 'native_vlan' in d:
commands.append(cmd + 'trunk native vlan ' + str(d['native_vlan']))
if commands:
commands.insert(0, 'interface ' + d['name'])
return commands
def set_commands(self, w, have):
commands = []
obj_in_have = flatten_dict(search_obj_in_list(w['name'], have, 'name'))
if not obj_in_have:
commands = self.add_commands(w)
else:
diff = self.diff_of_dicts(w, obj_in_have)
commands = self.add_commands(diff)
return commands
|
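The `expand_trunk_allowed_vlans` method above normalizes a VLAN range string before diffing. A minimal self-contained sketch of that normalization, with a stand-in for `vlan_range_to_list` (which the real module imports from `module_utils`; this reimplementation is an assumption for illustration only):

```python
# Stand-in for module_utils' vlan_range_to_list (assumed behavior):
# expand a range string such as "101,104-106" into [101, 104, 105, 106].
def vlan_range_to_list(vlans):
    result = []
    for part in str(vlans).split(','):
        if '-' in part:
            start, end = part.split('-')
            result.extend(range(int(start), int(end) + 1))
        else:
            result.append(int(part))
    return result


def expand_trunk_allowed_vlans(d):
    # Normalize d['trunk']['allowed_vlans'] to a sorted, comma-separated string,
    # mirroring the method of the same name in the module above.
    if not d:
        return None
    if 'trunk' in d and d['trunk']:
        if 'allowed_vlans' in d['trunk']:
            allowed = vlan_range_to_list(d['trunk']['allowed_vlans'])
            d['trunk']['allowed_vlans'] = ",".join(str(v) for v in sorted(allowed))


cfg = {'name': 'Ethernet2/31', 'trunk': {'allowed_vlans': '101,104-106'}}
expand_trunk_allowed_vlans(cfg)
print(cfg['trunk']['allowed_vlans'])  # -> 101,104,105,106
```

Both `want` and `have` pass through this expansion, so ranges and explicit lists compare equal during the diff.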
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,332 |
Add functionality to `nxos_l2_interfaces` to append the allowed vlans list
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Currently, the `nxos_l2_interfaces` module does not provide an option to append vlans to the allowed vlan list. When using `state: merged`, any config that was untouched by the task remains, but any configuration provided will replace the current config. This option was available in `nxos_l2_interface` via the `trunk_vlans` parameter. The module currently sends `switchport trunk allowed vlan <#>` rather than `switchport trunk allowed vlan add <#>`, so a full list of allowed vlans must be supplied.
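The distinction can be sketched as a hypothetical `append` toggle on command generation; the flag name and helper below are illustrative only, not the module's actual API:

```python
# Hypothetical sketch of the requested behavior: a toggle that switches
# command generation from replacing the allowed-vlan list to appending to it.
def allowed_vlan_command(vlans, append=False):
    verb = 'add ' if append else ''
    return 'switchport trunk allowed vlan {0}{1}'.format(verb, vlans)


print(allowed_vlan_command('200'))               # current behavior: replaces the list
print(allowed_vlan_command('200', append=True))  # requested: appends to the list
```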
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
nxos_l2_interfaces
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/Users/bdudas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
an-nxos-02# show version
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Documents: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.html
Copyright (c) 2002-2016, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained herein are owned by
other third parties and are used and distributed under license.
Some parts of this software are covered under the GNU Public
License. A copy of the license is available at
http://www.gnu.org/licenses/gpl.html.
NX-OSv is a demo version of the Nexus Operating System
Software
loader: version N/A
kickstart: version 7.3(0)D1(1)
system: version 7.3(0)D1(1)
kickstart image file is: bootflash:///titanium-d1-kickstart.7.3.0.D1.1.bin
kickstart compile time: 1/11/2016 16:00:00 [02/11/2016 10:30:12]
system image file is: bootflash:///titanium-d1.7.3.0.D1.1.bin
system compile time: 1/11/2016 16:00:00 [02/11/2016 13:08:11]
Hardware
cisco NX-OSv Chassis ("NX-OSv Supervisor Module")
QEMU Virtual CPU version 2.5 with 3064740 kB of memory.
Processor Board ID TM000B0000B
Device name: an-nxos-02
bootflash: 3184776 kB
Kernel uptime is 0 day(s), 2 hour(s), 44 minute(s), 51 second(s)
plugin
Core Plugin, Ethernet Plugin
Active Package(s)
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: nxos
gather_facts: false
vars:
new_vlans: 200
tasks:
- name: add vlans to allowed list
nxos_l2_interfaces:
config:
- name: Ethernet2/31
trunk:
allowed_vlans: "{{ new_vlans }}"
state: merged
...
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Starting Interface Config:
```
an-nxos-02# show run interface Ethernet 2/31
!Command: show running-config interface Ethernet2/31
!Time: Wed Nov 27 15:57:00 2019
version 7.3(0)D1(1)
interface Ethernet2/31
shutdown
switchport
switchport trunk native vlan 99
switchport trunk allowed vlan 101,104-106
```
Post Job interface Config:
```
an-nxos-02# show run interface Ethernet 2/31
!Command: show running-config interface Ethernet2/31
!Time: Wed Nov 27 16:04:53 2019
version 7.3(0)D1(1)
interface Ethernet2/31
shutdown
switchport
switchport trunk native vlan 99
switchport trunk allowed vlan 200
```
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook 2.9.1
config file = /Users/fubar/dev-net/nxos/ansible.cfg
configured module search path = ['/Users/fubar/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8)]
Using /Users/fubar/dev-net/nxos/ansible.cfg as config file
host_list declined parsing /Users/fubar/dev-net/inventory/hosts as it did not pass its verify_file() method
script declined parsing /Users/fubar/dev-net/inventory/hosts as it did not pass its verify_file() method
auto declined parsing /Users/fubar/dev-net/inventory/hosts as it did not pass its verify_file() method
Parsed /Users/fubar/dev-net/inventory/hosts inventory source with ini plugin
_____________________________
< PLAYBOOK: allowed_vlans.yml >
-----------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
1 plays in allowed_vlans.yml
_____________
< PLAY [nxos] >
-------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
META: ran handlers
__________________________________
< TASK [add vlans to allowed list] >
----------------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
task path: /Users/fubar/dev-net/nxos/allowed_vlans.yml:8
<nxos-01> ESTABLISH LOCAL CONNECTION FOR USER: fubar
<nxos-01> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828 `" && echo ansible-tmp-1574871651.6025379-270697238693828="` echo /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828 `" ) && sleep 0'
<nxos-01> Attempting python interpreter discovery
<nxos-01> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<nxos-01> Python interpreter discovery fallback (unsupported platform for extended discovery: darwin)
Using module file /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible/modules/network/nxos/nxos_l2_interfaces.py
<nxos-01> PUT /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/tmpjicqhgg0 TO /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828/AnsiballZ_nxos_l2_interfaces.py
<nxos-01> EXEC /bin/sh -c 'chmod u+x /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828/ /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828/AnsiballZ_nxos_l2_interfaces.py && sleep 0'
<nxos-01> EXEC /bin/sh -c '/usr/bin/python /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828/AnsiballZ_nxos_l2_interfaces.py && sleep 0'
<nxos-01> EXEC /bin/sh -c 'rm -f -r /Users/fubar/.ansible/tmp/ansible-local-351511mqamcek/ansible-tmp-1574871651.6025379-270697238693828/ > /dev/null 2>&1 && sleep 0'
[WARNING]: The value 200 (type int) in a string field was converted to u'200' (type string). If this does not look like
what you expect, quote the entire value to ensure it does not change.
[WARNING]: Platform darwin on host nxos-01 is using the discovered Python interpreter at
/usr/bin/python, but future installation of another Python interpreter could change this. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
changed: [nxos-01] => {
"after": [
{
"name": "Ethernet2/31",
"trunk": {
"allowed_vlans": "200",
"native_vlan": 99
}
}
],
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"before": [
{
"name": "Ethernet2/31",
"trunk": {
"allowed_vlans": "101,104,105,106",
"native_vlan": 99
}
}
],
"changed": true,
"commands": [
"interface Ethernet2/31",
"switchport trunk allowed vlan 200"
],
"invocation": {
"module_args": {
"config": [
{
"access": null,
"name": "Ethernet2/31",
"trunk": {
"allowed_vlans": "200",
"native_vlan": null
}
}
],
"state": "merged"
}
}
}
META: ran handlers
META: ran handlers
____________
< PLAY RECAP >
------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
nxos-01 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/65332
|
https://github.com/ansible/ansible/pull/66517
|
fd954a9c5c05c7149eb23271529ff070f2b1f9dc
|
4ac89b8ac7120f553c78eafb294c045f3baa8792
| 2019-11-27T16:31:18Z |
python
| 2020-02-04T20:14:04Z |
test/integration/targets/nxos_l2_interfaces/tests/cli/merged.yaml
|
---
- debug:
msg: "Start nxos_l2_interfaces merged integration tests connection={{ ansible_connection }}"
- set_fact: test_int1="{{ nxos_int1 }}"
- name: setup
cli_config: &cleanup
config: |
default interface {{ test_int1 }}
ignore_errors: yes
- block:
- name: setup2
cli_config:
config: |
interface {{ test_int1 }}
switchport
- name: Merged
nxos_l2_interfaces: &merged
config:
- name: "{{ test_int1 }}"
access:
vlan: 6
state: merged
register: result
- assert:
that:
- "result.changed == true"
- "result.before|length == 0"
- "'interface {{ test_int1 }}' in result.commands"
- "'switchport access vlan 6' in result.commands"
- "result.commands|length == 2"
- name: Gather l2_interfaces facts
nxos_facts:
gather_subset:
- '!all'
- '!min'
gather_network_resources: l2_interfaces
- assert:
that:
- "ansible_facts.network_resources.l2_interfaces|symmetric_difference(result.after)|length == 0"
- name: Idempotence - Merged
nxos_l2_interfaces: *merged
register: result
- assert:
that:
- "result.changed == false"
- "result.commands|length == 0"
always:
- name: teardown
cli_config: *cleanup
ignore_errors: yes
|
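The merged-state behavior reported in this issue follows from the diff computed in `set_commands` via `diff_of_dicts`: the diff keeps the wanted value wholesale rather than merging it with the existing list. A sketch on already-flattened dicts (flattening omitted for brevity):

```python
# Copy of the module's diff_of_dicts logic, applied to flattened dicts,
# showing why state `merged` replaces the allowed-vlan list outright.
def diff_of_dicts(w, obj):
    diff = dict(set(w.items()) - set(obj.items()))
    if diff and w['name'] == obj['name']:
        diff.update({'name': w['name']})
    return diff


have = {'name': 'Ethernet2/31', 'allowed_vlans': '101,104,105,106'}
want = {'name': 'Ethernet2/31', 'allowed_vlans': '200'}
print(diff_of_dicts(want, have))
# -> {'allowed_vlans': '200', 'name': 'Ethernet2/31'}
```

The diff contains only the new value `'200'`, which `add_commands` then renders as `switchport trunk allowed vlan 200`, discarding the previously allowed vlans.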
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,332 |
Add functionality to `nxos_l2_interfaces` to append the allowed vlans list
|
https://github.com/ansible/ansible/issues/65332
|
https://github.com/ansible/ansible/pull/66517
|
fd954a9c5c05c7149eb23271529ff070f2b1f9dc
|
4ac89b8ac7120f553c78eafb294c045f3baa8792
| 2019-11-27T16:31:18Z |
python
| 2020-02-04T20:14:04Z |
test/integration/targets/nxos_l2_interfaces/tests/cli/overridden.yaml
|
---
- debug:
msg: "Start nxos_l2_interfaces overridden integration tests connection={{ ansible_connection }}"
- set_fact: test_int1="{{ nxos_int1 }}"
- set_fact: test_int2="{{ nxos_int2 }}"
- name: setup1
cli_config: &cleanup
config: |
default interface {{ test_int1 }}
default interface {{ test_int2 }}
ignore_errors: yes
- block:
- name: setup2
cli_config:
config: |
interface {{ test_int1 }}
switchport
switchport trunk allowed vlan 11
interface {{ test_int2 }}
switchport
- name: Gather l2_interfaces facts
nxos_facts: &facts
gather_subset:
- '!all'
- '!min'
gather_network_resources: l2_interfaces
- name: Overridden
nxos_l2_interfaces: &overridden
config:
- name: "{{ test_int2 }}"
access:
vlan: 6
state: overridden
register: result
- assert:
that:
- "ansible_facts.network_resources.l2_interfaces|symmetric_difference(result.before)|length == 0"
- "result.changed == true"
- "'interface {{ test_int1 }}' in result.commands"
- "'no switchport trunk allowed vlan' in result.commands"
- "'interface {{ test_int2 }}' in result.commands"
- "'switchport access vlan 6' in result.commands"
- name: Gather l2_interfaces post facts
nxos_facts: *facts
- assert:
that:
- "ansible_facts.network_resources.l2_interfaces|symmetric_difference(result.after)|length == 0"
- name: Idempotence - Overridden
nxos_l2_interfaces: *overridden
register: result
- assert:
that:
- "result.changed == false"
- "result.commands|length == 0"
always:
- name: teardown
cli_config: *cleanup
ignore_errors: yes
|
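The assertions in the tests above use Jinja2's `symmetric_difference` filter to check that gathered facts and the module's reported state match exactly. Equivalent Python on a simplified fact list (an illustration of the check, not Jinja2's implementation):

```python
import json


def canon(seq):
    # Dicts are unhashable, so canonicalize each to a sorted JSON string
    # before taking a set symmetric difference.
    return {json.dumps(d, sort_keys=True) for d in seq}


facts = [{'name': 'Ethernet1/1', 'access': {'vlan': 6}}]
after = [{'name': 'Ethernet1/1', 'access': {'vlan': 6}}]

diff = canon(facts) ^ canon(after)  # symmetric difference
print(len(diff))  # 0 when facts and reported state agree
```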
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,332 |
Add functionality to `nxos_l2_interfaces` to append the allowed vlans list
|
https://github.com/ansible/ansible/issues/65332
|
https://github.com/ansible/ansible/pull/66517
|
fd954a9c5c05c7149eb23271529ff070f2b1f9dc
|
4ac89b8ac7120f553c78eafb294c045f3baa8792
| 2019-11-27T16:31:18Z |
python
| 2020-02-04T20:14:04Z |
test/integration/targets/nxos_l2_interfaces/tests/cli/replaced.yaml
|
---
- debug:
msg: "Start nxos_l2_interfaces replaced integration tests connection={{ ansible_connection }}"
- set_fact: test_int1="{{ nxos_int1 }}"
- set_fact: test_int2="{{ nxos_int2 }}"
- name: setup1
cli_config: &cleanup
config: |
default interface {{ test_int1 }}
default interface {{ test_int2 }}
ignore_errors: yes
- block:
- name: setup2
cli_config:
config: |
interface {{ test_int1 }}
switchport
switchport access vlan 5
interface {{ test_int2 }}
switchport
switchport trunk native vlan 15
- name: Gather l2_interfaces facts
nxos_facts: &facts
gather_subset:
- '!all'
- '!min'
gather_network_resources: l2_interfaces
- name: Replaced
nxos_l2_interfaces: &replaced
config:
- name: "{{ test_int1 }}"
access:
vlan: 8
state: replaced
register: result
- assert:
that:
- "ansible_facts.network_resources.l2_interfaces|symmetric_difference(result.before)|length == 0"
- "result.changed == true"
- "'interface {{ test_int1 }}' in result.commands"
- "'switchport access vlan 8' in result.commands"
- "result.commands|length == 2"
- name: Gather l2_interfaces post facts
nxos_facts: *facts
- assert:
that:
- "ansible_facts.network_resources.l2_interfaces|symmetric_difference(result.after)|length == 0"
- name: Idempotence - Replaced
nxos_l2_interfaces: *replaced
register: result
- assert:
that:
- "result.changed == false"
- "result.commands|length == 0"
always:
- name: teardown
cli_config: *cleanup
ignore_errors: yes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,015 |
win_stat has option which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has an option marked with `removed_in_version='2.10'`. This option should better be removed before Ansible 2.10 is released.
```
lib/ansible/modules/windows/win_stat.ps1:0:0: ansible-deprecated-version: Argument 'get_md5' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
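The logic behind this sanity-test message can be sketched as a plain version comparison — a hypothetical illustration, not the actual test code — where an option is flagged once its `removed_in_version` is less than or equal to the running Ansible version:

```python
# Hypothetical sketch of the deprecated-version check, not the real sanity test.
# An argument is flagged when its removed_in_version has already been reached
# by the current version (e.g. '2.9' vs. '2.10.0.dev0').

def is_deprecated(removed_in_version, current_version):
    """Return True when the removal version is <= the current version."""
    def parse(version):
        # Compare only the leading numeric dotted prefix,
        # so '2.10.0.dev0' is treated as (2, 10, 0).
        parts = []
        for piece in version.split('.'):
            if piece.isdigit():
                parts.append(int(piece))
            else:
                break
        return tuple(parts)

    return parse(removed_in_version) <= parse(current_version)

print(is_deprecated('2.9', '2.10.0.dev0'))   # already reached -> flagged
print(is_deprecated('2.12', '2.10.0.dev0'))  # still in the future
```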
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/windows/win_stat.ps1
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67015
|
https://github.com/ansible/ansible/pull/67105
|
1bb94ec92fe837a30177b192a477522b30132aa1
|
78470c43c21d834a9513fb309fb219b74a5d1cee
| 2020-02-01T13:51:09Z |
python
| 2020-02-04T23:02:04Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
* Windows Server 2008 and 2008 R2 will no longer be supported or tested in the next Ansible release, see :ref:`windows_faq_server2008`.
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead.
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.
* :ref:`redfish_config <redfish_config_module>`: the ``bios_attribute_name`` and ``bios_attribute_value`` options will be removed. To maintain the existing behavior use the ``bios_attributes`` option instead.
* :ref:`clc_aa_policy <clc_aa_policy_module>`: the ``wait`` parameter will be removed. It has always been ignored by the module.
* :ref:`redfish_config <redfish_config_module>`, :ref:`redfish_command <redfish_command_module>`: the behavior to select the first System, Manager, or Chassis resource to modify when multiple are present will be removed. Use the new ``resource_id`` option to specify target resource to modify.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`ec2 <ec2_module>`: the ``group`` and ``group_id`` options will become mutually exclusive. Currently ``group_id`` is ignored if you pass both.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.
* :ref:`elb_network_lb <elb_network_lb_module>`: the default behaviour for the ``state`` option will change from ``absent`` to ``present``. To maintain the existing behavior explicitly set state to ``absent``.
* :ref:`vmware_tag_info <vmware_tag_info_module>`: the module will not return ``tag_facts`` since it does not return multiple tags with the same name and different category id. To maintain the existing behavior use ``tag_info`` which is a list of tag metadata.
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead.
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`zabbix_proxy <zabbix_proxy_module>` deprecates ``interface`` sub-options ``type`` and ``main`` when proxy type is set to passive via ``status=passive``. Make sure these suboptions are removed from your playbook as they were never supported by Zabbix in the first place.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs all ``*.ps1`` files in the directory specified due to it executing potentially unknown scripts. It will follow the default behaviour of only running tests for files that are like ``*.tests.ps1``, which is built into Pester itself.
* :ref:`win_find <win_find_module>` has been refactored to better match the behaviour of the ``find`` module. Here is what has changed:
* When the directory specified by ``paths`` does not exist or is a file, it will no longer fail and will just warn the user
* Junction points are no longer reported as ``islnk``, use ``isjunction`` to properly report these files. This behaviour matches the :ref:`win_stat <win_stat_module>`
* Directories no longer return a ``size``; this matches the ``stat`` and ``find`` behaviour and has been removed due to the difficulties in correctly reporting the size of a directory
* :ref:`docker_container <docker_container_module>` no longer passes information on non-anonymous volumes or binds as ``Volumes`` to the Docker daemon. This increases compatibility with the ``docker`` CLI program. Note that if you specify ``volumes: strict`` in ``comparisons``, this could cause existing containers created with docker_container from Ansible 2.9 or earlier to restart.
* :ref:`docker_container <docker_container_module>`'s support for port ranges was adjusted to be more compatible to the ``docker`` command line utility: a one-port container range combined with a multiple-port host range will no longer result in only the first host port be used, but the whole range being passed to Docker so that a free port in that range will be used.
* :ref:`purefb_fs <purefb_fs_module>` no longer supports the deprecated ``nfs`` option. This has been superseded by ``nfsv3``.
Plugins
=======
Lookup plugin names case-sensitivity
------------------------------------
* Prior to Ansible ``2.10`` lookup plugin names passed in as an argument to the ``lookup()`` function were treated as case-insensitive as opposed to lookups invoked via ``with_<lookup_name>``. ``2.10`` brings consistency to ``lookup()`` and ``with_`` to be both case-sensitive.
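As an illustrative sketch of the difference (these tasks are an assumption for demonstration, not taken from the changelog):

```yaml
# Illustrative only: from Ansible 2.10 on, the plugin name passed to
# lookup() is case-sensitive, matching the with_<lookup_name> form.
- debug:
    msg: "{{ lookup('file', '/etc/hostname') }}"   # resolves the 'file' plugin

- debug:
    msg: "{{ lookup('File', '/etc/hostname') }}"   # fails in 2.10; matched case-insensitively before
```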
Noteworthy plugin changes
-------------------------
* The ``hashi_vault`` lookup plugin now returns the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret which required additional steps to extract and filter the desired version.
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,015 |
win_stat has option which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has an option marked with `removed_in_version='2.10'`. This option should better be removed before Ansible 2.10 is released.
```
lib/ansible/modules/windows/win_stat.ps1:0:0: ansible-deprecated-version: Argument 'get_md5' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/windows/win_stat.ps1
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67015
|
https://github.com/ansible/ansible/pull/67105
|
1bb94ec92fe837a30177b192a477522b30132aa1
|
78470c43c21d834a9513fb309fb219b74a5d1cee
| 2020-02-01T13:51:09Z |
python
| 2020-02-04T23:02:04Z |
lib/ansible/modules/windows/win_psexec.ps1
|
#!powershell
# Copyright: (c) 2017, Dag Wieers (@dagwieers) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#AnsibleRequires -CSharpUtil Ansible.Basic
#Requires -Module Ansible.ModuleUtils.ArgvParser
#Requires -Module Ansible.ModuleUtils.CommandUtil
# See also: https://technet.microsoft.com/en-us/sysinternals/pxexec.aspx
$spec = @{
options = @{
command = @{ type='str'; required=$true }
executable = @{ type='path'; default='psexec.exe' }
hostnames = @{ type='list' }
username = @{ type='str' }
password = @{ type='str'; no_log=$true }
chdir = @{ type='path' }
wait = @{ type='bool'; default=$true }
nobanner = @{ type='bool'; default=$false }
noprofile = @{ type='bool'; default=$false }
elevated = @{ type='bool'; default=$false }
limited = @{ type='bool'; default=$false }
system = @{ type='bool'; default=$false }
interactive = @{ type='bool'; default=$false }
session = @{ type='int' }
priority = @{ type='str'; choices=@( 'background', 'low', 'belownormal', 'abovenormal', 'high', 'realtime' ) }
timeout = @{ type='int' }
extra_opts = @{ type='list'; removed_in_version = '2.10' }
}
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
$command = $module.Params.command
$executable = $module.Params.executable
$hostnames = $module.Params.hostnames
$username = $module.Params.username
$password = $module.Params.password
$chdir = $module.Params.chdir
$wait = $module.Params.wait
$nobanner = $module.Params.nobanner
$noprofile = $module.Params.noprofile
$elevated = $module.Params.elevated
$limited = $module.Params.limited
$system = $module.Params.system
$interactive = $module.Params.interactive
$session = $module.Params.session
$priority = $module.Params.Priority
$timeout = $module.Params.timeout
$extra_opts = $module.Params.extra_opts
$module.Result.changed = $true
If (-Not (Get-Command $executable -ErrorAction SilentlyContinue)) {
$module.FailJson("Executable '$executable' was not found.")
}
$arguments = [System.Collections.Generic.List`1[String]]@($executable)
If ($nobanner -eq $true) {
$arguments.Add("-nobanner")
}
# Support running on local system if no hostname is specified
If ($hostnames) {
$hostname_argument = ($hostnames | Sort-Object -Unique) -join ','
$arguments.Add("\\$hostname_argument")
}
# Username is optional
If ($null -ne $username) {
$arguments.Add("-u")
$arguments.Add($username)
}
# Password is optional
If ($null -ne $password) {
$arguments.Add("-p")
$arguments.Add($password)
}
If ($null -ne $chdir) {
$arguments.Add("-w")
$arguments.Add($chdir)
}
If ($wait -eq $false) {
$arguments.Add("-d")
}
If ($noprofile -eq $true) {
$arguments.Add("-e")
}
If ($elevated -eq $true) {
$arguments.Add("-h")
}
If ($system -eq $true) {
$arguments.Add("-s")
}
If ($interactive -eq $true) {
$arguments.Add("-i")
If ($null -ne $session) {
$arguments.Add($session)
}
}
If ($limited -eq $true) {
$arguments.Add("-l")
}
If ($null -ne $priority) {
$arguments.Add("-$priority")
}
If ($null -ne $timeout) {
$arguments.Add("-n")
$arguments.Add($timeout)
}
# Add additional advanced options
If ($extra_opts) {
ForEach ($opt in $extra_opts) {
$arguments.Add($opt)
}
}
$arguments.Add("-accepteula")
$argument_string = Argv-ToString -arguments $arguments
# Add the command at the end of the argument string, we don't want to escape
# that as psexec doesn't expect it to be one arg
$argument_string += " $command"
$start_datetime = [DateTime]::UtcNow
$module.Result.psexec_command = $argument_string
$command_result = Run-Command -command $argument_string
$end_datetime = [DateTime]::UtcNow
$module.Result.stdout = $command_result.stdout
$module.Result.stderr = $command_result.stderr
If ($wait -eq $true) {
$module.Result.rc = $command_result.rc
} else {
$module.Result.rc = 0
$module.Result.pid = $command_result.rc
}
$module.Result.start = $start_datetime.ToString("yyyy-MM-dd hh:mm:ss.ffffff")
$module.Result.end = $end_datetime.ToString("yyyy-MM-dd hh:mm:ss.ffffff")
$module.Result.delta = $($end_datetime - $start_datetime).ToString("h\:mm\:ss\.ffffff")
$module.ExitJson()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,015 |
win_stat has option which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has an option marked with `removed_in_version='2.10'`. This option should better be removed before Ansible 2.10 is released.
```
lib/ansible/modules/windows/win_stat.ps1:0:0: ansible-deprecated-version: Argument 'get_md5' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/windows/win_stat.ps1
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67015
|
https://github.com/ansible/ansible/pull/67105
|
1bb94ec92fe837a30177b192a477522b30132aa1
|
78470c43c21d834a9513fb309fb219b74a5d1cee
| 2020-02-01T13:51:09Z |
python
| 2020-02-04T23:02:04Z |
lib/ansible/modules/windows/win_psexec.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: 2017, Dag Wieers (@dagwieers) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: win_psexec
version_added: '2.3'
short_description: Runs commands (remotely) as another (privileged) user
description:
- Run commands (remotely) through the PsExec service.
- Run commands as another (domain) user (with elevated privileges).
requirements:
- Microsoft PsExec
options:
command:
description:
- The command line to run through PsExec (limited to 260 characters).
type: str
required: yes
executable:
description:
- The location of the PsExec utility (in case it is not located in your PATH).
type: path
default: psexec.exe
extra_opts:
description:
- Specify additional options to add onto the PsExec invocation.
- This module was undocumented in older releases and will be removed in
Ansible 2.10.
type: list
hostnames:
description:
- The hostnames to run the command.
- If not provided, the command is run locally.
type: list
username:
description:
- The (remote) user to run the command as.
- If not provided, the current user is used.
type: str
password:
description:
- The password for the (remote) user to run the command as.
- This is mandatory in order to authenticate yourself.
type: str
chdir:
description:
- Run the command from this (remote) directory.
type: path
nobanner:
description:
- Do not display the startup banner and copyright message.
- This only works for specific versions of the PsExec binary.
type: bool
default: no
version_added: '2.4'
noprofile:
description:
- Run the command without loading the account's profile.
type: bool
default: no
elevated:
description:
- Run the command with elevated privileges.
type: bool
default: no
interactive:
description:
- Run the program so that it interacts with the desktop on the remote system.
type: bool
default: no
session:
description:
- Specifies the session ID to use.
- This parameter works in conjunction with I(interactive).
- It has no effect when I(interactive) is set to C(no).
type: int
version_added: '2.7'
limited:
description:
- Run the command as limited user (strips the Administrators group and allows only privileges assigned to the Users group).
type: bool
default: no
system:
description:
- Run the remote command in the System account.
type: bool
default: no
priority:
description:
- Used to run the command at a different priority.
choices: [ abovenormal, background, belownormal, high, low, realtime ]
timeout:
description:
- The connection timeout in seconds.
type: int
wait:
description:
- Wait for the application to terminate.
- Only use for non-interactive applications.
type: bool
default: yes
notes:
- More information related to Microsoft PsExec is available from
U(https://technet.microsoft.com/en-us/sysinternals/bb897553.aspx)
seealso:
- module: psexec
- module: raw
- module: win_command
- module: win_shell
author:
- Dag Wieers (@dagwieers)
'''
EXAMPLES = r'''
- name: Test the PsExec connection to the local system (target node) with your user
win_psexec:
command: whoami.exe
- name: Run regedit.exe locally (on target node) as SYSTEM and interactively
win_psexec:
command: regedit.exe
interactive: yes
system: yes
- name: Run the setup.exe installer on multiple servers using the Domain Administrator
win_psexec:
command: E:\setup.exe /i /IACCEPTEULA
hostnames:
- remote_server1
- remote_server2
username: DOMAIN\Administrator
password: some_password
priority: high
- name: Run PsExec from custom location C:\Program Files\sysinternals\
win_psexec:
command: netsh advfirewall set allprofiles state off
executable: C:\Program Files\sysinternals\psexec.exe
hostnames: [ remote_server ]
password: some_password
priority: low
'''
RETURN = r'''
cmd:
description: The complete command line used by the module, including PsExec call and additional options.
returned: always
type: str
sample: psexec.exe -nobanner \\remote_server -u "DOMAIN\Administrator" -p "some_password" -accepteula E:\setup.exe
pid:
description: The PID of the async process created by PsExec.
returned: when C(wait=False)
type: int
sample: 1532
rc:
description: The return code for the command.
returned: always
type: int
sample: 0
stdout:
description: The standard output from the command.
returned: always
type: str
sample: Success.
stderr:
description: The error output from the command.
returned: always
type: str
sample: Error 15 running E:\setup.exe
'''
|